One of the recurring themes in discussions about the risks of AI is that the sheer quantity of AI-sourced content will be overwhelming and logistically difficult to check for accuracy and efficacy. This gives rise to the risk of misinformation. The output from a single AI prompt can be orders of magnitude greater than we would ever get a chance to review in full.

Ironically, I typically include Wikipedia links in most of my blog posts. I do this for convenience, and because it lets anyone else check the sources; it also saves me time, as the citations are over there. I can’t see any way that Wikipedia can stop the various AI tools using Wikipedia in the same way, but I’d guess there would also be unparsed information in the mix too.

I am personally very familiar with running queries against a source database. Being able to do so lets one solve problems and learn more about the data in a very compressed and specific way. The diagram shows a high-level flowchart of a SQL query. For AI, the query process consists of “prompts”, which are framing questions; but instead of a limited set of variables being executed against a specific database, the scope is much wider.

Using AI to process and sort data is a bit like running a database query, but the target database is so open-ended that even describing the content universe is almost impossible. In theory the AI data source can be almost infinite.
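The contrast can be sketched in a few lines of code. This is a minimal illustration, not anything from the post itself: the table and column names are made up, and the point is simply that a SQL query’s universe is closed and enumerable, while an AI prompt’s is not.

```python
import sqlite3

# Hypothetical data purely for illustration: a tiny, fully known universe.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (title TEXT, editor TEXT, citations INTEGER)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?, ?)",
    [("Peter Drucker", "alice", 42), ("Bruce Chatwin", "bob", 17)],
)

# The query's scope is fully enumerable: one table, three columns.
# Every row that shaped the answer can be inspected afterwards.
rows = conn.execute(
    "SELECT title FROM articles WHERE citations > 20 ORDER BY title"
).fetchall()
print(rows)

# An AI prompt plays the same framing role as the WHERE clause above,
# but the "database" it runs against is open-ended and unenumerable,
# so the result cannot be audited row by row in the same way.
```

The bounded, reproducible result set is exactly what makes a conventional query checkable, and exactly what an open-ended AI source lacks.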

I was thinking about this in regard to Wikipedia. Wikipedia has a relatively small group of active editors who contribute out of all proportion to their numbers. Their diligence and passion for the cause more than make up for the small headcount, which is considerable in absolute terms but tiny compared to the actual scale of Wikipedia.

With 321 different language versions, more than 32 million registered editors, and an average of 600 new articles a day on English Wikipedia alone, Wikipedia is massively popular.

“No original research” is a criticism levelled at some Wikipedia entries; you can check the definition here. The key idea is this:

Wikipedia articles must not contain original research

With AI (if I understand this correctly) it may not be easy, or even possible, to check the original sources of the content. With Wikipedia there is a template for citations and at least a set of de facto standards. Anything outside those gets flagged to readers, and even if some of the content is questionable, at least the flags are helpful.

In earlier days I used to enjoy reading Peter Drucker. I understand one criticism of his books was that he sometimes just “interviewed his typewriter”. That is, some of his musings and theories came from informal observations, anecdotes and fireside chats with friends. When he was growing up, his parents had quite the set of friends and very influential visitors to the home for meals. I would have loved to have been at one of those dinner parties. In my view, early exposure to erudite and engaging discussions of that nature greatly helped Drucker develop his own authorial voice.

Over the next 70 years, Drucker’s writings would be marked by a focus on relationships among human beings, as opposed to the crunching of numbers. His books were filled with lessons on how organizations can bring out the best in people

Another author I admire is Bruce Chatwin. He was able to mash up fiction, non-fiction and notes into a very readable style, so much so that readers admired his writing even though some of it was poetic licence. It turns out his secret weapon was his editor, who wrangled much of his copy into eminently readable short chapters and contributed a great deal to his success. Susannah Clapp, who was one of his editors, wrote:

I had written the reader’s report on the book. It had dazzled and worried me. It was exceptional – but it was enormous and it didn’t flow. I became his editor, with the task of making the book speed along….For me, his great gift – on the page and in person – was visual generosity. He made you see different things and look at things differently.

We don’t hear much about editors, and I don’t have one myself, ha ha. But I have worked with editors, and it is a real delight to get feedback and input on written content. Of course we try to self-edit, and the fact that I have been writing for 55 years now is definitely a bonus.

I like Robert Reich’s writing too. I do think there are rare individuals like him, Drucker, and maybe even ex-Secretary of State Kissinger, whose longevity and depth of experience make them exceptional at decoding patterns of theory from real-life situations. That they have had ringside seats at some of the circuses makes their observations possibly more insightful, certainly in my view. However, we shouldn’t give them a free pass: as theory and actual research catch up with their contributions, we can reassess based on the new data we have.

“Neutron Jack” Welch was widely admired by some at the time. He was extremely influential on corporate culture, or the lack of it, and earned himself a lengthy list of criticisms. With a longer-term view, it seems the decisions he made “on behalf of the company” were much more closely aligned with his own pockets. That is, he was in a powerful position to negotiate his own rewards and salary packages at levels that arguably damaged management thinking for a decade or more. I mention him because his biases had huge influence but were not really questioned at the time, because Wall St investors liked the results.

Popular knowledge and tastes do change, as do the ways we learn about them. If TV and the media celebrate a business leader, it becomes much harder to check for bias; fashion is part of the bias set. I suspect the same will be true for AI-sourced data.

In his 2022 book The Man Who Broke Capitalism, journalist David Gelles argues that Welch’s practices, including financialization, downsizing and mergers and acquisitions, have caused widespread damage to GE and many other large corporations and have contributed to the massive increase in income inequality in the United States since the 1980s.

On the plus side, there may be situations where AI’s sheer capability to parse an almost infinite array of data sources and other content lets it sort and articulate theories that could be very useful. If anyone is writing a prime-directives-like ethical framework for AI, all of these biases need to be taken into account so that misinformation is minimised.
