Linus Torvalds on the impact of LLMs and AI on programming

I think I like his take on the topic.

January 21, 2024

Some hints about what the next year of AI looks like

Professor Ethan Mollick’s Signs and Portents analyzes what AI has achieved, what the effects have been so far, and what we might expect in 2024. To ground ourselves, we can start with two quotes that should inform any estimates about the future. The first is Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Social change is slower than technological change. We should not expect to see immediate global effects of AI in a major way, no matter how fast its adoption (and it is remarkably fast), yet we certainly will see it sooner than many people think. ...

January 7, 2024

Stuff we figured out about AI in 2023

Simon Willison, who’s recently been my go-to person for all AI-related stuff, has an excellent 2023 AI round-up on his website. 2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s. Here’s my attempt to round up the highlights in one place! The links contained within the post are also valuable. You may already know Simon’s website if you are interested in LLMs and AI. If you don’t, I suggest you start following him, preferably via his RSS feed like real hackers do. ...

January 1, 2024

Quoting Andrej Karpathy

I always struggle a bit when I’m asked about the “hallucination problem” in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines. We direct their dreams with prompts. The prompts start the dream, and based on the LLM’s hazy recollection of its training documents, most of the time the result goes someplace useful. It’s only when the dreams go into deemed factually incorrect territory that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does. ...

December 9, 2023

Intro to Large Language Models (video)

Andrej Karpathy has a very well-done Intro to Large Language Models video on YouTube. As a founding member and research scientist at OpenAI, with a multi-year hiatus spent leading Tesla’s Autopilot team, Karpathy is an authority in the field. He is also good at explaining hard things. As a Kahneman reader, I appreciated the Thinking, Fast and Slow analogy proposed about halfway through the video: “System 1” (fast, automatic thinking, rapid decisions) is where LLMs are now; “System 2” (rational, slow thinking, complex decisions) is their next goal. I also suspect Karpathy’s intriguing idea of the LLM as the core of a new kind of operating system is not too far off from what will emerge soon. The final segment on AI security and known attack vectors (jailbreaking, prompt injection, data poisoning) is also super interesting. ...

November 24, 2023

AI-curated minimalist news

Minimalist News is the first LLM project that excites me, albeit in a nervous way. Quoting the About page: We only publish significant news. To find them we use AI (ChatGPT-4) to read and analyze 1000 top news every day. For each article it estimates magnitude, scale, potential and credibility. Then we combine these estimates to get the final Significance score from 0 to 10. And now the best part: We’ll only send you the news scored 6.5 or higher. Sometimes it’s 5 articles, sometimes 2, sometimes 8. And sometimes — none at all. But one thing is constant — you can be sure that you haven’t missed anything important. ...
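
Out of curiosity, here is a minimal sketch of how a filter like the one described above could be wired up. The field names, the weights, and the way the four estimates are combined are my own assumptions for illustration, not how Minimalist News actually implements it.

```typescript
// Hypothetical sketch of the significance filter described above.
// Field names, weights, and the aggregation formula are assumptions,
// not the actual Minimalist News implementation.

interface ArticleEstimate {
  title: string;
  magnitude: number;   // 0-10, how big the event is
  scale: number;       // 0-10, how many people it affects
  potential: number;   // 0-10, likely long-term consequences
  credibility: number; // 0-10, trust in the source and claims
}

// Combine the four estimates into a single 0-10 significance score.
// A plain weighted average is assumed here purely for illustration.
function significance(a: ArticleEstimate): number {
  return (
    0.3 * a.magnitude + 0.25 * a.scale + 0.25 * a.potential + 0.2 * a.credibility
  );
}

// Keep only the articles scoring 6.5 or higher, as the About page describes.
function selectNews(articles: ArticleEstimate[]): ArticleEstimate[] {
  return articles.filter((a) => significance(a) >= 6.5);
}
```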

May 3, 2023

Noam Chomsky on ChatGPT

Noam Chomsky’s essays are always worth reading, no matter the topic he decides to address, because, well, frankly, he’s one of the brightest and most well-informed minds of our time. His criticism of OpenAI’s ChatGPT is no exception. It does an excellent job of explaining how LLMs work, how they differ from human reasoning, and why, in his opinion, the advent of artificial general intelligence is still a long way off, if it ever comes. ...

April 9, 2023

ChatGPT is making up fake Guardian articles

Chris Moran, the Guardian’s head of editorial innovation: Last month one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. Had the headline perhaps been changed since it was launched? Had it been removed intentionally from the website because of a problem we’d identified? Or had we been forced to take it down by the subject of the piece through legal means? ...

April 6, 2023

Quoting John Carmack

John Carmack, offering advice on the rise of AI and its influence on the software engineering profession: Software is just a tool to help accomplish something for people – many programmers never understood that. Keep your eyes on the delivered value, and don’t over-focus on the specifics of the tools. I have often fallen into the over-focusing trap in my career. The whole thread is well worth reading: ...

March 20, 2023

Chess@home is a distributed artificial intelligence for chess

The Chess@home project is the winner of the recent Node Knockout and, for once, it is something genuinely innovative and intriguing. The goal: building the most powerful chess-playing artificial intelligence in the world, generated by nothing less than the browsers active on the web. Distributed collaborative computing became famous thanks to projects such as SETI@home and Folding@home. Simplifying a great deal, we could say that this kind of application has a small program installed and run on tens of thousands of volunteer computers; the project’s computing power is the sum of the individual contributions. The novelty of Chess@home lies in the idea of using JavaScript code that runs in the browser, with no need for dedicated clients. Special widgets embedded in the pages of participating sites graft the computation onto the visitor’s computer, potentially increasing tenfold the number of nodes taking part in the computation (the more visitors open the same page at the same time). ...
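
To make the idea concrete, here is a minimal sketch of what such a browser widget could look like. The endpoint names, the shape of a work unit, and the engine call are assumptions for illustration, not Chess@home’s actual code.

```typescript
// Hypothetical sketch of a browser-side widget: fetch a work unit from a
// coordinating server, evaluate it locally, and post the result back.
// Endpoint names and the work-unit shape are assumptions, not the real API.

interface WorkUnit {
  id: string;
  position: string; // chess position to analyse, e.g. a FEN string
  depth: number;    // requested search depth
}

async function runWidget(baseUrl: string): Promise<void> {
  // Ask the coordinating server for a position to analyse.
  const unit: WorkUnit = await (await fetch(`${baseUrl}/work`)).json();

  // Stand-in for the real JavaScript chess engine running in the browser.
  const evaluation = analysePosition(unit.position, unit.depth);

  // Send the result back; the server sums contributions from all visitors.
  await fetch(`${baseUrl}/result`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: unit.id, evaluation }),
  });
}

// Placeholder stub: a real widget would delegate this to an engine running
// in a Web Worker so the page stays responsive while it searches.
function analysePosition(position: string, depth: number): number {
  return 0;
}
```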

September 9, 2011 · Nicola Iarocci