Linus Torvalds on the impact of LLMs and AI on programming
I think I like his take on the topic.
NTS Radio is a family of like-minded and passionate individuals, dedicated to supporting exciting music and culture through online radio and events. NTS uncovers the best of the musical past, celebrates the present and cultivates the future of the underground music scene, and prides itself on being open-minded and experimental. (source) I’m certainly a latecomer, but NTS Radio is the bomb. I have not opened Spotify (whose algorithm I find dull and repetitive) in the last week, not even once. I go straight to NTS to find the best and most diverse music from all genres, played by DJs and independent radio stations worldwide. Absolute banger. ...
Upon his death in 1543 in Frombork, Poland, Copernicus was buried in the local cathedral. Over the subsequent centuries, the location of his grave was lost to history. There were several unsuccessful attempts to locate Copernicus’s remains, dating as far back as the 16th and 17th centuries. Another failed attempt was made by the French emperor Napoleon after the 1807 Battle of Eylau. Napoleon held Copernicus in high regard as a polymath, mathematician and astronomer. In 2005, a group of Polish archaeologists took up the search. ...
The goal of the pg_rman project is to provide a method for online backup and PITR that is as easy as pg_dump. Also, it maintains a backup catalog per database cluster. Users can maintain old backups including archive logs with one command. We’ve always been doing our Postgres backups the rudimentary way via pg_dumpall, which works and is purely logical (one can restore across different Postgres versions), but pg_rman maintains a catalog and has point-in-time recovery. ...
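The pg_rman workflow really is about as simple as pg_dump. A minimal sketch (the backup and data directory paths here are placeholders, not from the original post):

```
# Initialize the backup catalog once, pointing at the cluster's data directory
pg_rman init -B /var/lib/pgsql/backups -D /var/lib/pgsql/data

# Take a full online backup (archive logs are collected into the catalog too)
pg_rman backup --backup-mode=full -B /var/lib/pgsql/backups

# Backups must be validated before they can be used for restore
pg_rman validate -B /var/lib/pgsql/backups

# List what the catalog knows about
pg_rman show -B /var/lib/pgsql/backups

# Point-in-time recovery: stop the server, then restore to a target time
pg_rman restore -B /var/lib/pgsql/backups -D /var/lib/pgsql/data \
    --recovery-target-time "2024-01-01 12:00:00"
```

Unlike a pg_dumpall dump, these backups are physical, so they only restore to the same major Postgres version, which is the trade-off for getting PITR and incremental backups.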
Professor Ethan Mollick’s Signs and Portents analyzes what AI has achieved, what the effects have been so far, and what we might expect in 2024. To ground ourselves, we can start with two quotes that should inform any estimates about the future. The first is Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Social change is slower than technological change. We should not expect immediate, major global effects of AI, no matter how fast its adoption (and it is remarkably fast); yet we will certainly see them sooner than many people think. ...
Simon Willison, who’s recently been my go-to person for all AI-related stuff, has an excellent 2023 AI round-up on his website. 2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s. Here’s my attempt to round up the highlights in one place! The links contained within the post are also valuable. You may know Simon’s website if you are interested in LLMs and AI. If you don’t, I suggest you start following him, preferably via his RSS feed like real hackers do. ...
For those who don’t know me, I’m a demographer. I study population. And my first love in fantasy was, of course, Middle Earth. How many people live in Middle Earth? Being a demographer, I was mainly interested in the data side of things. Tolkien is frustratingly vague about population. He almost never gives us estimates of settlement sizes, and many of the larger metropolises of Middle Earth (like Pelargir) never actually appear on the page. Sizable armies make frequent appearances, yet because his adventurers almost exclusively traverse the wilds of Middle Earth, we rarely see where those soldiers are coming from. ...
The Guardian’s The Winterkeeper: A Lifetime Spent Protecting Yellowstone National Park is a beautiful short documentary I truly enjoyed watching. A little research on Steven Fuller, the protagonist, allowed me to dig out some promising reading material.
Andrej Karpathy has a very well-done Intro to Large Language Models video on YouTube. As a founding member and research scientist at OpenAI, with a two-year hiatus working on Tesla Autopilot, Karpathy is an authority in the field. He is also good at explaining hard things. As a Kahneman reader, I appreciated the Thinking, Fast and Slow analogy proposed about halfway through the video: “System 1” (fast, automatic thinking, rapid decisions) is where we are now; “System 2” (rational, slow thinking, complex decisions) is LLMs’ next goal. Also, I suspect Karpathy’s intriguing idea of LLMs as the center of a new “operating system style” is not too far off from what will emerge soon. The final segment on AI security and known attack vectors (jailbreaking, prompt injection, data poisoning) is also super interesting. ...
Quoting Jan van den Berg: This weekend we learned that Bram Moolenaar had passed away at the age of 62. And this news affected me more than I expected. Like so many: I did not know Bram personally. But I’ve been using a tool made by Bram for more than half my life — at least weekly, sometimes daily. That tool is a text editor. The best one there is: Vim. ...