To even the most casual reader it seems like every day there is a news story about some wonderful thing AI is doing for us – or perhaps doing to us, and why we should be very wary. The need for a story angle in much of the reporting means that "us versus them" conflict has been a major driver of the coverage so far.

Having worked in technology sectors for more than 30 years, I am not really surprised by the current hype cycle / fear cycle. New technology can be intimidating, and even when much of it has been around for years, it can still take some kind of earthquake-level impact before it really changes lives in the way we think it might.

An example – I’ve been working remotely for more than 20 years. Yes, the more recent advent of high-speed fibre internet has made this much easier, but one of the most remarkable changes from the Covid-19 pandemic has been the widespread acceptance of remote work by the managerial class. What has held back remote work and telecommuting for decades has been management not wanting it. Now that the genie is out of the bottle, many companies are finding their staff don’t want to go back to factory-style micromanagement and 19th-century attitudes to work. Remote work has changed; the power dynamic between managers and staff – not so much.

In a way this frames my thinking about AI, automation generally and all of the vague forecasts around technology-driven change. Unless the changes are adopted at the level of the social fabric, the landscape stays very much the same for most of us. It is also a truism that technology changes many things in very pervasive ways, and not always for the best.

In my working life I have spent years working with business software engines of various kinds. Most of the software I have worked with has been complex, designed to map processes and systematise specialist knowledge in almost every vertical market you care to name. Usually this is so that business owner-operators can monetise the transactions that flow from those processes at every point in the business cycle. I have worked on ERP systems (big-end accounting systems) and many CRM systems, which sit more at the sales end of the cycle.

The overall objective is always to model processes and support systems so as to codify, quantify, count and predict what the results of those activities will be – and to enable speed, efficiency and scale, as well as knowledge sharing within company networks.

In my view that has been the long-term arc since the rise of PCs around the mid 80s, when some of those systems got a bit more memory. Of course the average mobile phone now has more computing power than most of those early PCs ever did, but in those days I worked as a productivity management consultant and we had spreadsheets for everything.

In some ways, what we were trying to do with spreadsheets to model business systems in the 80s just got more sophisticated as time went by. More bandwidth, faster machines and better education have all contributed to a kind of technology myth-making: if only we had the latest software and the fastest computers, and so on. The AI hysteria seems very similar.

Ironically, in 2023 we are just repeating some of this utopian mania. Now, of course, we think the tech can actually replace many of the staff, and rather than question the ethics of that idea we seem intent on some bizarre kind of singularity where the machine becomes us.

I like Naomi Klein, and while her observations about ChatGPT are predictable, they are worth a read. She quite rightly points out that the real scale of the problem lies at the level of assumptions and ideology. If a new system supports and extends the dominant economic systems of the day, and those systems are deeply flawed in terms of societal outcomes, then we should be very wary of it.

In AI machines aren’t ‘hallucinating’. But their makers are by Naomi Klein

“Warped hallucinations are indeed afoot in the world of AI, however – but it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.”

…..

“There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoliation.”

….

“A world of deep fakes, mimicry loops and worsening inequality is not an inevitability. It’s a set of policy choices. We can regulate the current form of vampiric chatbots out of existence – and begin to build the world in which AI’s most exciting promises would be more than Silicon Valley hallucinations.

Because we trained the machines. All of us. But we never gave our consent. They fed on humanity’s collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances.”

“For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

…..

“Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.”

When AI thinks you’re dead – Ethan Zuckerman

“The ways large language models get things wrong can be fascinating. These systems are optimised for plausibility, not accuracy. If you ask for academic writing, it will output text complete with footnotes, because the writing the response is modelled on contains footnotes. But search for the papers cited and you may discover that they do not exist, as librarians have found when dismayed students seek their help in locating papers that ChatGPT has simply invented.

Scholars of machine learning refer to these errors as “hallucinations.”

……

ChatGPT may well make search systems better. But it is essential that we interrogate such tools, to avoid inadvertently reinforcing biases in the process of adopting a new technology whose powers can seem almost magical.”

A Twitter discussion based on this article is also very much on point.

Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them

“MW: I think it’s stunning that someone would say that the harms [from AI] that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential.

 

What I hear in that is, “Those aren’t existential to me. I have millions of dollars, I am invested in many, many AI startups, and none of this affects my existence. But what could affect my existence is if a sci-fi fantasy came to life and AI were actually super intelligent, and suddenly men like me would not be the most powerful entities in the world, and that would affect my business.”

…..

“it’s distracting us from what’s real on the ground and much harder to solve than war-game hypotheticals about a thing that is largely kind of made up. And particularly, it’s distracting us from the fact that these are technologies controlled by a handful of corporations who will ultimately make the decisions about what technologies are made, what they do, and who they serve.”

I suspect that the truth will be even more banal. It echoes my own work experience: over several years I sold more than a million dollars’ worth of CRM- and ERP-related software to large corporates. Those companies bought the software because of the background narrative such software hints at. They thought it would automate their businesses to the point where they would gain a raft of benefits – and a version of that is absolutely true.

But just like all those sales pitches for all-singing, all-dancing workflow software, we know it is only as good as the actual implementation at each site. Many times a version of the software did well, when those businesses focused on the behavioural changes needed to support staff, and to embed, incentivise and align work with values shared by the staff and the company.

On the other hand, many times only a fraction of the promised advantages were ever realised by the business, and I suspect much the same will happen with AI. It is just another wave in the automation cycle and in the ongoing love/hate relationship we have with technology as a society.

Real society is very nuanced. I have worked for brief periods with and for advertising agencies, and they are fascinating because they are trying to decode the human software of rituals and feelings on a day-to-day basis. What works for an advertising campaign in the UK, the US or Australia may not work in New Zealand or Sweden. As a very famous person once said of English-speaking nations, “we are divided by a common language.”

But advertising people, all intricately involved in the art of persuasion, will be all over ChatGPT – though it will only be successful when guided by true creatives. I suspect that, like the mythical Difference Engine of novelists William Gibson and Bruce Sterling, we are entering a parallel timeline of alternate universes and alternate history. (The image credit comes from the cover of that novel.)

Any system – whether ChatGPT, other kinds of automation or lower-level task management software – needs to be much more culturally aligned and nuanced to be truly helpful. That doesn’t mean we won’t see positive examples of ChatGPT being used in, say, legal discovery processes or something similar, or annoying and less helpful implementations at insurance companies, but in 2023 we are still very much in the hype-cycle part of the story.

digital upskilling – Tom Fishburne, the Marketoonist

“As I heard recently, it’s not so much the risk of AI coming for your job as the risk of someone who knows how to use AI coming for your job.”

By the way, you should all subscribe to Tom’s updates; he has a way of summarising complex ideas about marketing in very savvy and useful ways. I’ve been following him for more than 10 years now, and his cartoons and blog posts are very sharp.

P.S. I’m reminded of earlier conversations about VR and other innovations. When we use these tools to augment the human experience, and temper that with ethical considerations, we will get a much better result for humanity.

