I’ve been thinking about ethics and technology recently.
Michael Hiltzik’s column in today’s Los Angeles Times (5/12/2019) profiles Tristan Harris, a former Google design ethicist and founder of the Center for Humane Technology, who argues that technology today is actively engaged in “human downgrading.”
“Harris’ key insight,” Hiltzik writes, “is that it’s a mistake to treat the drawbacks of mobile technologies and social media as separate and unrelated. In fact, ‘they’re all connected to an extractive attention economy,’ Harris told an audience at the Milken conference [this spring].
“‘When your business model is extracting data and attention out of people,’ Harris said, the result is a ‘race to the bottom of the brain stem’ in which social media platforms feed users more and more of whatever content will keep them onsite. In practice, that means more radical and extreme content that feeds on human weaknesses.”
Turning these thoughts to the world of media, on last week’s Digital Production Buzz, Mark Raudonis, Senior VP at Bunim/Murray, struck a chord as he described dealing with the ever-larger shooting ratios and ever-decreasing deadlines inherent in reality programs today. The solution, for Mark, is automating clip review and, perhaps, automating color grading.
These tools make sense when you are trying to find a one-hour story in four thousand hours of footage. But the technology won’t stop there – as Terry Curren, founder of the post-production house Alpha Dogs, said on the same show, “the more technology improves, the more likely the ‘middle-class of post’ will get squeezed out.”
“Historically,” Terry said, “throughout all of written history, if you wanted to entertain people, you were a starving artist. You travelled from town to town and you hoped to make enough doing a play, or whatever, to get some meals and maybe get a night’s sleep.
“Then this weird thing happened where we came along with a film camera and you could record somebody’s performance once and then play it back a ton of times and charge for it and, suddenly, it became a way of making a lot of money as an artist. However, this was a limited world. It cost so much money to make the films and distribute them, which kept it a tight group. This created an artificial community of people making a lot of money as artists.
“What I see now is, we’ve taken away those strangleholds, so now anybody can make content and get it out there, and I just see us going back to a kind of ‘starving artists’ land again.”
While I’m not as pessimistic about the future as Terry, I’m still concerned.
Quoting Tristan Harris again: “What drives [this human downgrading] trend,” Harris told Hiltzik, “is the thirst for advertising dollars, which are dependent on audience engagement.”
“The ‘free’ business model is the most expensive business model ever invented. What does ‘free’ buy us? It buys us free social isolation, free downgrading of our attention spans, free downgrading of democracy, free teen mental health issues. That’s what the business model of maximizing attention has bought us.”
Back in the early days of personal computers, the goal of technology was to empower people. For example, the impact of word processing and spreadsheets was revolutionary and profound.
Now, however, I think the focus has changed. To me, the goal of technology seems to be empowering computers. The Cloud, artificial intelligence, machine learning, and big data are all technologies used to make computers smarter.
In the US, we tend to launch new technology first, then figure out the societal impact later. Those whose neighborhoods are awash in electric scooters know what I mean. To say nothing of Uber, Facebook or … well, any tech startup that launches first and worries about the rules and regulations much later.
However, the situation is different in Europe. The European Commission recognizes AI as one of the 21st century’s most strategic technologies and is increasing its annual investment in AI by 70% as part of the research and innovation program called “Horizon 2020.”
A Horizon 2020 press release stated that “the EU [is setting] a strong regulatory framework for technology ethics that will set the global standard for human-centric and trustworthy AI.
“To this end, the EU Commission has set up a high-level expert group and tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy.”

According to the guidelines, three components are necessary to achieve trustworthy AI: it should be lawful, complying with all applicable laws and regulations; ethical, adhering to ethical principles and values; and robust, from both a technical and social perspective.
We can’t stop the onrush of technology, but we can take the time to think about the results of what we are creating. The Law of Unintended Consequences still applies. Personally, I think the ethics of technology will become a major issue during the next few years – especially as deadlines continue to tighten, budgets continue to shrink, and jobs start to disappear.