It's Time to Rethink the Algorithm

Originally published on Adweek.com by Gene Richardson

200,000 years of human evolution, in which we discovered fire and invented the wheel, have brought us to the next frontier: what Wikipedia calls “a process or set of rules to be followed in calculations or other problem-solving operations.”

It doesn’t sound especially exciting, but algorithms have shaken up the world of computing. Of course, they have always been there under the hood: algorithms are simply the rules that govern how our software operates behind the scenes.

But now the algorithm is stepping out from behind the curtain. More and more of the services we use depend on it. Take your Amazon recommendations, for example. A recommendation algorithm mines your browsing and buying history to serve products you just have to buy. We all love the gods of logic for it.
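The idea behind such a recommendation algorithm can be sketched in a few lines. This is an illustrative toy, not Amazon's actual system: it simply recommends items that most often co-occur in hypothetical purchase histories.

```python
# Toy item-to-item recommender (illustrative only; real systems are far
# more sophisticated): recommend items most often bought together.
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (baskets)
baskets = [
    {"rain boots", "umbrella"},
    {"rain boots", "umbrella", "raincoat"},
    {"umbrella", "raincoat"},
]

# Count how often each pair of items appears in the same basket
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item):
    """Return items ranked by how often they co-occur with `item`."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [i for i, _ in scores.most_common()]

print(recommend("rain boots"))  # → ['umbrella', 'raincoat']
```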

But for all of the benefits logic-driven services bestow on us, there is also a dark side. When there is a disconnect between the intentions of an algorithm's creator and the needs of its recipient, when algorithms go awry, they can have disastrous consequences.

Facing pressure over the claimed political slant of its news promotion, Facebook axed its entire human editorial staff, replacing it with an algorithmically driven presentation designed to mix top trends with individualized relevance.

Algorithms, unfortunately, can’t identify fake news stories as effectively as humans can. Shortly after, the platform was rife with fatuous stories, more partisan and deceitful than anything the editorial team had been accused of.

This double-edged sword wasn’t just suffered by the world of journalism, but by the advertising industry, as well. If you ever searched online for a pair of rain boots, bought them offline and found those boots following you around the web weeks after purchase, you have been the victim of a “retargeting” algorithm, which wastes advertiser dollars and hurts your user experience.
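The fix the paragraph implies is simple in principle, as this sketch shows. The names and data here are hypothetical, and no real ad platform's API is being used: retargeting should suppress users already known to have purchased.

```python
# Illustrative sketch (not any real ad platform's API): a retargeting
# audience that excludes shoppers who already bought, so ads stop
# following them around after purchase.

viewed_boots = {"alice", "bob", "carol"}   # browsed rain boots online
purchased_boots = {"bob"}                  # bought (perhaps offline, if
                                           # purchase data makes it back)

# Naive retargeting: everyone who viewed gets the ad, buyers included
naive_audience = viewed_boots

# Smarter retargeting: suppress users known to have purchased
retarget_audience = viewed_boots - purchased_boots

print(sorted(retarget_audience))  # → ['alice', 'carol']
```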

Algorithms’ creators are well-meaning enough. But, as with all technology, it is the human design behind them that can help them soar or sink.

Research shows that people learn when they question what’s going on around them, when they are able to assimilate a broad set of circumstances and experiences. But digital information that is served narrowly, based only on what computers think we want to read, serves no one.

At the heart of this algorithmania is an obsession with dataism: a rapid and calculating exploitation of the digital breadcrumb trail we lay as we go, and a growing belief that this trail offers everything technologists need to serve back attractive services and products.

But it doesn’t. What’s becoming clear to me and many others in the industry is that algorithms only get us so far. We need something else to be present in the decisions about what we see, what we consume, what we buy and how we grow.

We don’t go to see art exhibits painted by the hands of robots. We don’t rock out at concerts played by central processing units. For sure, algorithmic and generative art exists, and much of it is great. But on the whole, when it comes to matters of the human condition as revealed through cultural expression, we prefer to seek edification from other warm bodies.

Cultural products made by humans using technology are the perfect symbiosis, and I think that is the model to which we now need to return.

It is time to rethink algorithms, to put a driver back at the wheel of decision-making. Our routines of logic have proven themselves amazing deliverers of a subset of services and destructive miscreants when it comes to vital others.

Now they need help. Algorithms alone should not dictate user experience. When crafting software in the future, developers should more carefully consider their own logical deficiencies, acknowledge that they can’t pre-program the world and accept a helping hand from the rest of us.

Comments (3)

Aaron Whittier, young web designer

Commented:
Why would you believe Wikipedia anyways?
You've identified a problem that has been known for thousands of years. Fundamentally it's arrogance and stupidity. Bright young things think they have the answer! But examination shows the problem is far more complex, far more basic, and difficult to solve. If it were easy it would already have been solved! It's easy to cry wolf (and extremely valuable) but the real contribution is to propose a solution. Where's yours? A plea for inclusion in the decision making process doesn't wash. No one invites outsiders into the inner sanctum.

The solution exists in disciplined method. Information systems academic professionals have a short history (30 to 50 years) of extraordinary science from some brilliant minds. We have a full repertoire of how to properly address systems development. Unfortunately it is not applied well. If you were a structural engineer or an electrical engineer, or whatever, you would be well measured in the work you produce. We don't do this, so we allow crap to be unloaded on our consumers. We have the metrics. We don't apply them. That is the answer to our defective development.

Commented:
Gene: This might be worth a read.  
https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html?_r=0

Calling this "algorithms" is like calling the car a horseless carriage.  It misunderstands the potential, and even as early attempts at the automobile were inept and full of missteps, so too are our early attempts at machine learning and artificial intelligence.  It took Google, with all their computers and brainpower, until 2012 before they could identify a cat in a video.  But the same technology that can learn what a cat "looks like" can formulate a response, locate tumors in MRI data, or turn the wheel of a self-driving car.  You might be interested in trying out Tensorflow.  "Don't worry, you can't break it."
http://playground.tensorflow.org/

To understand the way machine learning works, you need little more than a basic understanding of the first derivative in differential calculus.  Detection and prediction are all about minimizing error with gradient descent along multidimensional slopes.  As simple as it seems today, we only got workable AI tools in 2016.  If the world had looked at the 1886 patent 37435 in 1887 and let its shortcomings shape its vision of the automobile's future, we would probably still be riding horses.
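The gradient-descent idea described above can be sketched in a few lines of Python. This is a hypothetical one-dimensional example, not TensorFlow itself: compute the first derivative of an error function and repeatedly step downhill along the slope.

```python
# Minimal sketch of gradient descent: minimize the squared error
# f(w) = (w - 3)**2 using only its first derivative f'(w) = 2*(w - 3).

def gradient_descent(lr=0.1, steps=100):
    w = 0.0                    # arbitrary starting guess
    for _ in range(steps):
        grad = 2 * (w - 3)     # first derivative of the error surface
        w -= lr * grad         # step downhill along the slope
    return w

w = gradient_descent()
print(round(w, 4))  # converges toward the minimum at w = 3
```

The same loop, generalized to millions of weights and a gradient computed over training data, is the engine underneath modern machine learning frameworks.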

AI and machine learning are different from human learning, as is illustrated by this image that was created when we tried to turn the cat-detection algorithms around and get them to generate a picture of a cat.  We don't really know how machine learning works in any given instance.  It's unpredictable to the human mind because it's about detecting patterns in large amounts* of data, something we cannot do very well, and something that machine learning, suddenly, has come to grasp.  I am presently experimenting and writing about this field, and I guarantee it is revolutionary, a black swan of the magnitude of the atom bomb, and perhaps the greatest scientific advancement of our lifetimes.
[Image: A numerically generated cat]
* Google analyzed literally millions of cat videos before their algorithms could even find a cat.  And then they drew this monster.  Obviously we're only on the threshold.
