This text belongs to ‘Artificial’, the newsletter that Delia Rodríguez will send to La Vanguardia subscribers every Friday. If you want to receive it, sign up here.
Dear readers, welcome to the first issue of ARTIFICIAL, La Vanguardia’s AI newsletter.
A few weeks ago I wrote in this newspaper about a tweet that impressed me. I haven’t been able to find the author’s name or the original message, but I remember it saying that the smartest career move right now, post-ChatGPT, is to take a gap year and spend it learning about artificial intelligence. Unfortunately, that is not within my reach, but I can use the old trick of journalists and bloggers: write about it, because there is nothing like forcing yourself to explain something in order to understand it.
For this reason, and so that at least one revolution catches us looking in the right direction, we are starting this newsletter. It will arrive on Fridays, to keep up the fantasy that we can end the work week with a summary of what happened. The enormous volume of information, which barely allows anyone to stay up to date, is only one of the problems of this moment, and not even the most important; we recently talked about that AI-nxiety. How do you choose the right sources if you have only just become interested in the subject? How do you break the language barrier when the best information is in English and tends to be very technical?
A few days ago, Casey Newton, the author of Platformer, raised some relevant questions about how to report on AI. A journalist usually begins by asking the experts, he says, but on this issue those who know the most hold opinions that can be extreme. Should we play with the new toys that, after years of obscure research, companies have finally made available to us? How do you do that if you suspect a catastrophe may be coming? Shouldn’t you be mentioning the danger all the time? This meta-reflection matters, Newton says, because not long ago tech journalism erred by letting itself be dazzled by the big platforms, with disastrous social consequences.
For the moment, I have asked some of the people I know with the best judgment how they are informing themselves about artificial intelligence. Carmen Pacheco tells me something sensible: “Right now there are people in their rooms doing incredible things 24/7. Most of them won’t go anywhere, but suddenly one will explode and it will be something no expert could have predicted. I don’t know if we are all going to die or if the tsunami will be surfable, but something has already started that cannot be stopped. That’s why I just follow what’s happening and I don’t read predictions, because they seem like a waste of time. Within a week a new branch of the subject appears and they become obsolete.”
These are the recommendations:
– Atlas of AI, by Kate Crawford. “Her famous thesis is that artificial intelligence is neither intelligent nor artificial. First, because it depends on the efforts of thousands of people. And second, because it depends on our extracting a great deal of entirely physical material, with a fairly large environmental impact.”
– God, Human, Animal, Machine, by Meghan O’Gieblyn. “It is a book that thinks about what it is to think, about what intelligence is, and about why we are so generous when it comes to valuing artificial intelligence and yet so stingy when it comes to recognizing the intelligence of other species.”
– The Alignment Problem, by Brian Christian. “One of the crucial problems is how we design an intelligence capable of upholding our values. One that can take off without destroying democracy or society.”
– Automation and the Future of Work, by Aaron Benanav. “One of the promises/threats of AI is the end of jobs. The book is an in-depth investigation of what has happened in the countries where automation has been implemented and what results it has had.”
What happened this week: