A Trip Down The AI Rabbit Hole — Part 3

My top 3 lessons on artificial intelligence from Dario Amodei and Gary Marcus


Even a non-technical person, which I most definitely am, can't ignore artificial intelligence (AI). I will probably never understand all of its inner workings. But its economics, possible business applications and safety risks are areas that even a tech layman can comprehend.

Dario Amodei is the co-founder and CEO of Anthropic, the developer of the Claude family of large language models. Gary Marcus is a psychologist and cognitive scientist who has written extensively about AI and argues in favour of regulating it. Gary's new book "Taming Silicon Valley: How We Can Ensure That AI Works for Us" is scheduled to be published tomorrow (and no, I am not earning affiliate income from this link).

Here are my 3 takeaways after watching recent interviews with both of them on YouTube:

Lesson #1

It is a bit premature to talk about various business models for AI since there is no killer app yet and use cases are still few.

A plausible forecast for the industry suggests that it will be dominated by 4 to 5 firms and, possibly, a few government-owned players. The reason is the extremely high cost of building, training and running a model.

Providing a universal benefit may not be enough to generate high returns. Solar energy is a good example: it creates a clear benefit for society, yet companies in the sector struggle to be profitable.

The economics of AI can be compared with those of heavy industry, like steel production. There is an enormous upfront cost to build and train a model. On top of that come the inference costs (the cost of running a trained model to generate outputs from new data), which could eventually exceed the cost of training itself.
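To make that analogy concrete, here is a minimal back-of-envelope sketch in Python. All of the figures are hypothetical placeholders I made up for illustration; none of them come from the interviews or from any real company:

```python
# Back-of-envelope sketch: when does cumulative inference spending
# overtake the one-time training cost? All figures below are
# hypothetical placeholders, not real industry numbers.

TRAINING_COST = 100_000_000   # one-time build-and-train cost in USD (assumed)
COST_PER_1K_QUERIES = 10.0    # serving cost per 1,000 queries in USD (assumed)
QUERIES_PER_DAY = 50_000_000  # daily query volume (assumed)

daily_inference_cost = QUERIES_PER_DAY / 1_000 * COST_PER_1K_QUERIES
days_to_parity = TRAINING_COST / daily_inference_cost

print(f"Daily inference spend: ${daily_inference_cost:,.0f}")
print(f"Inference overtakes training after ~{days_to_parity:.0f} days")
```

At these assumed rates, serving costs catch up with the training bill in well under a year, which is the sense in which inference can eventually dwarf the upfront investment.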

Lesson #2

When the most dire doomsday scenarios are put aside, other significant risks created by AI remain: misinformation, deepfakes and covert racism, to name just a few.

Misinformation is the first of these, and it comes with downstream consequences, be it election or market manipulation. Existing LLMs generate output by sampling from a probability distribution, which optimises for plausibility rather than truth; to avoid misinformation, fact-checking has to be performed on top of that.
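Here is a minimal sketch of that distinction, using a toy two-word vocabulary and a hypothetical fact_check() helper of my own invention (nothing here describes how any real LLM or product works): generation picks whatever is probable, and only a separate verification step catches what is false.

```python
import random

def sample_next_token(distribution: dict) -> str:
    """Draw one token according to its probability, the way an LLM samples output."""
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

def fact_check(statement: str, trusted_facts: set) -> bool:
    """Hypothetical verifier: accept only claims found in a trusted source."""
    return statement in trusted_facts

# A fluent but false completion can carry high probability.
distribution = {"won": 0.6, "lost": 0.4}           # toy next-token distribution
claim = f"Candidate X {sample_next_token(distribution)} the election"

trusted_facts = {"Candidate X lost the election"}  # assumed ground truth
status = "verified" if fact_check(claim, trusted_facts) else "unverified"
print(claim, "->", status)
```

Most runs will confidently print the false claim, and only the verification step flags it as unverified; that is the gap fact-checking is meant to close.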

Regulation is not a threat to the industry, nor will it stop innovation. Airlines, for example, are thriving even though theirs is a heavily regulated industry at all levels, and it is precisely that regulation that has made flying safer than driving a car. Similarly, regulation is needed for AI to protect against its worst downsides: misuse, abuse and political manipulation.

Lesson #3

One of the less talked about effects of AI is that it compresses the skill differential. AI-assisted coding improves the productivity and output quality of average and weaker programmers, while the highest-skilled coders find AI of limited use. As AI enables average coders to perform better, the value of the skills the best coders have developed declines.

This is not dissimilar to what happened during the Industrial Revolution, when artisanal hand weaving of wool was replaced by machines. Quality did decrease somewhat in the process, but that was well compensated for by a dramatic increase in productivity. Suddenly there was no longer a need for so many artisans and their apprentices.

Again, history shows that humans are exceptionally good at adapting. If AI can write 90% of the code faster and better, humans will accept that and are likely to adapt, becoming exceptionally good at writing the remaining 10%.

And, as a final thought, GenAI is not the only path to developing artificial intelligence, just one of them. What we are currently seeing, Marcus believes, looks more like a dress rehearsal for bigger things to come in 5 or 10, or who knows how many, years.

For those who would rather draw their own lessons, here are the links to the interviews with Dario Amodei and Gary Marcus.

This note was originally published on Medium.com on 16 September 2024.