Linus Torvalds on the impact of LLMs and AI on programming
I think I like his take on the topic.
I think I like his take on the topic.
Professor Ethan Mollick’s Signs and Portents analyzes what AI has achieved, what the effects have been so far, and what we might expect in 2024. To ground ourselves, we can start with two quotes that should inform any estimates about the future. The first is Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Social change is slower than technological change. We should not expect to see immediate global effects of AI in a major way, no matter how fast its adoption (and it is remarkably fast), yet we certainly will see it sooner than many people think. ...
Simon Wilson, who’s recently been my go-to person for all AI-related stuff, has an excellent 2023 AI round-up on his website. 2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most “interesting development in the academic field of Artificial Intelligence that dates back to the 1950s. Here’s my attempt to round up the highlights in one place! The links contained within the post are also valuable. You may know Simon’s website if you are interested in LLMs and AI. If you don’t, I suggest you start following him, preferably via his RSS feed like real hackers do. ...
I always struggle a bit with I’m asked about the “hallucination problem” in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines. We direct their dreams with prompts. The prompts start the dream, and based on the LLM’s hazy recollection of its training documents, most of the time the result goes someplace useful. It’s only when the dreams go into deemed factually incorrect territory that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does. ...
Andrej Karpathy has a very well-done Intro to Large Language Models video on YouTube. As a founding member and research scientist at OpenAI and with a two-year hiatus working on Tesla Autopilot, Karpathy is an authority in the field. He is also good at explaining hard things. As a Kahneman reader, I appreciated the Thinking Fast and Slow analogy proposed at about half-length in the video: “System 1” (fast automatic thinking, rapid decisions) is where we’re now; “System 2” (rational, slow thinking, complex decisions) is LLMs next goal. Also, I suspect Karpathy’s intriguing idea of LLMs as the center of a new “operating system style” is not too far off from what will emerge soon. The final segment on AI security and known attack vectors (jailbreaking, prompt injection, data poisoning) is also super interesting. ...
Minimalist News is the first LLM project that excites me but in a nervous way. Quoting the About page: We only publish significant news. To find them we use AI (ChatGPT-4) to read and analyze 1000 top news every day. For each article it estimates magnitude, scale, potential and credibility. Then we combine these estimates to get the final Significance score from 0 to 10. And now the best part: We’ll only send you the news scored 6.5 or higher. Sometimes it’s 5 articles, sometimes 2, sometimes 8. And sometimes — none at all. But one thing is constant — you can be sure that you haven’t missed anything important. ...