“Are algorithms the answer?” “You would think that having more data on a question would give better insight into how to address it.”

************************ 

I went to the doctor to see whether she could do something to slow the advancing neuropathy in my feet. I made an appointment for a specific time. I was a few minutes early. I sat waiting for 20 minutes. Thankfully, I had brought something to read, shielding me from the options of watching medical infomercials or reading tattered magazines with titles such as “The Ten Warning Signs of Impending Health Doom” or, worse yet, reading old issues of Better Homes and Gardens.

“Doc, I’m experiencing neuropathy in my feet.”

“I’m not the doctor. The doctor will be with you in a few minutes. Please stand here while I get your height and your weight. How old are you? What is your sex? Did you fill out the questionnaire about your health history? Does anyone in your family have health problems? We’d like to take a blood sample. Can we take a blood sample? Okay, thanks. Someone will be in to take a sample of your blood shortly.”

“Hello. My name is Alfreda. I’ll be taking your blood sample today. Are you a drug addict? No? Ha ha. Yes, I see you have really good veins. Would you like me to poke you in the left arm or the right? You’ll feel a slight sting.”

Ten minutes later the doctor shows up. “Hello. I’m Doctor McGillicuddy. How are you today? Let me take a look at this chart. I see here that you haven’t had your flu shot this year. Will you get a flu shot? I’ll give you a script for a flu shot. And it appears you haven’t had a shingles shot, either. I’ll write you a script for a shingles shot. When did you have your last colonoscopy? Okay, I’ll write you up a script for that, too. I think that about covers it.”

“Uh, doc, I actually came in because I’m experiencing tingling in my feet.” 

“Oh? Okay. Here’s a script so you can see a specialist about that. You’ll need to make your own appointment.”

“Do you have a recommendation for who I should see?”

She looked at me like I had just said my favorite food was squid guacamole. “You go to the website of your insurance provider and select an authorized podiatrist. I’m sure you’ll be able to find someone to see you within the next two months. Have a nice day.”

This is the doctor’s office, driven by algorithms. Who writes the algorithms? It seems it must be some combination of demographic analysts and insurance companies. The result is that when I go to see the doctor, I am invisible. Only my demographics are considered for diagnosis. I’m sure this approach produces some good results. Getting a flu shot or a shingles shot or a colonoscopy are all probably good ideas. But the doctor seemed taken by surprise that I had specific information that might inform a diagnosis. I had disrupted the algorithm. She was not entirely unresponsive, but she took minimal interest in helping me pursue the issue that was actually bothering me, while taking a great deal of interest in the problems I was merely “likely” to have. The doctor had become the algorithm and had learned to leave her brain at home.

This may not be medicine at its worst, but it’s pretty low. I mean, if I want to be ignored, I can stay home and do it for free. 

*****************

Until recently, many of the questions around TikTok, a Chinese-owned company, focused on its data security. But with an estimated 90 million users in the U.S., there is more concern about its algorithms “driving users into rabbit holes of potentially harmful content about depression, sexual abuse, drug use, and eating disorders.” – John McKinnon, WSJ

********************

Sometimes I get polls in the mail. Invariably it’s a special interest group that wants a donation. For a little fun, avoid looking at the source of the poll. Take the poll first and then guess. You will be able to guess who has provided the poll, even if you’ve never heard of the organization. I mean, you will be able to tell where it stands on the political spectrum and you will be able to tell its pet policy concerns. 

Why is this true? First, there is the selection of issues: certain questions are asked, while questions that are relevant but inconvenient to the pollster are peculiarly absent. Then there is the loaded language: would you choose this horrible option, or would you choose this obvious and wonderful one? And finally, they frame your answers. They give you five options to choose from, none of which is the answer you would actually give. If you have a nuanced response to any of the questions, forgettaboutit! No nuance permitted. It’s a black-and-white world, you should know.

These fund-raising polls are exercises in fishing for ideological sympathizers with a few loose bucks, but even serious polls suffer from many of these same problems. Collecting good data is very hard. What percentage of people can you get to take the time to seriously respond to a poll? And on what basis will you decide that those who respond do not create an inherent bias, simply because they are the unusual ones who will take the time?
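
You can see the problem for yourself with a little arithmetic. Here is a toy simulation in Python, with numbers I have invented purely for illustration: suppose 30 percent of the population supports some policy, but supporters are three times as likely as everyone else to bother responding to a poll about it.

    import random

    # Toy simulation of non-response bias. All numbers are invented
    # for illustration; no real poll is being modeled here.
    random.seed(42)

    population = [True] * 300 + [False] * 700   # 30% genuinely support
    responses = []
    for supports in population:
        respond_prob = 0.15 if supports else 0.05  # assumed response rates
        if random.random() < respond_prob:
            responses.append(supports)

    true_rate = sum(population) / len(population)
    polled_rate = sum(responses) / len(responses)
    print(f"True support:   {true_rate:.0%}")    # 30%
    print(f"Polled support: {polled_rate:.0%}")  # well above 30%

The poll doesn’t just miss; it misses systematically, and collecting more responses won’t fix it, because the bias is baked into who answers in the first place.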

**********************

Polls are maddening. They are always asking, “Should it be black?” or “Should it be white?” when the answer is almost always, “It should be some shade of gray.” So we are often left with statistical “proofs” that are nonsense. Are polls so bad because, by design, they must be simplistic? Or is it that their originators are manufacturers of propaganda? Garbage in; garbage out.

*********************

“A software bug in Facebook’s algorithm was discovered to be amplifying misinformation and other harmful content,” said Alex Heath in The Verge. “In October, Facebook engineers said they began noticing that rather than ‘suppressing posts from repeat misinformation offenders, the News Feed was instead giving the posts distribution, spiking views by as much as 30 percent globally,’ essentially reversing the decisions of Facebook’s own fact-checkers. The glitch didn’t affect Facebook’s other moderation tools, but it did expose vulnerabilities in the algorithms the company was using to ‘downrank,’ or suppress, ‘borderline’ content—such as clickbait and political ads—in its News Feed.” (from THE WEEK)
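
The report doesn’t say what the bug actually was, but it is easy to imagine how a downranking system can quietly invert. Here is a hypothetical sketch in Python; this is my illustration, not Facebook’s code:

    # Hypothetical illustration of how a downranking bug could amplify
    # flagged posts instead of suppressing them. Not Facebook's actual code.
    def rank_score(base_score: float, penalty: float) -> float:
        """Downrank a flagged post by multiplying its score by penalty < 1."""
        return base_score * penalty

    # Intended behavior: flagged posts get half the distribution.
    print(rank_score(100, 0.5))      # 50.0  -- suppressed, as designed

    # Buggy behavior: somewhere in the pipeline the penalty is inverted,
    # and flagged posts now outrank clean ones.
    print(rank_score(100, 1 / 0.5))  # 200.0 -- amplified instead

One inverted factor and the fact-checkers’ decisions run in reverse, which would be consistent with the spike in views the engineers described.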

*********************

“The effort to bypass content-moderation filters on social media has given rise to a new form of internet-driven language, said Taylor Lorenz in The Washington Post. ‘“Algospeak” is becoming increasingly common across the internet’ to avoid getting posts removed or down-ranked by the machine-based algorithms that power platforms such as TikTok, YouTube, Instagram, and Twitch. ‘In many online videos, it’s common to say “unalive” rather than “dead,” “seggs” instead of “sex,” or “SA” instead of “sexual assault”.’ Some influencers have created Google docs ‘with lists of hundreds of words they believe moderation systems deem problematic,’ regardless of the context. ‘You have to say “saltines” when you’re literally talking about crackers,’ said one Twitch creator, since Twitch considers ‘cracker’ to be a slur.” (from THE WEEK)
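
It is easy to see why these substitutions work. A naive keyword filter, the kind algospeak is built to slip past, might look like this; the blocklist below is my own toy example, not any platform’s real list:

    # Toy keyword filter showing why "algospeak" substitutions work.
    # The blocklist is invented for illustration.
    BLOCKLIST = {"dead", "sex", "cracker"}

    def is_flagged(post: str) -> bool:
        words = {w.strip('.,!?"').lower() for w in post.split()}
        return not BLOCKLIST.isdisjoint(words)

    print(is_flagged("The battery is dead"))     # True  -- context ignored
    print(is_flagged("The battery is unalive"))  # False -- trivially evaded
    print(is_flagged("I ate a cracker"))         # True  -- a "slur," says the filter
    print(is_flagged("I ate a saltine"))         # False

The filter has no idea what a post means; it only knows which strings appear in it. Hence “saltines.”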

***********************

I remember when computers began to be the rage in my workplace, as they were in every workplace at the time. As our jobs became heavily flavored with data input, it occurred to me that we had arrived at a place where we could make incredibly fast decisions about the most irrelevant subjects. We could tell you, for example, how many linear feet of caulk were installed by our weatherization program in census tract 279 between October 1 and October 31 of 2005.
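
If you are curious what that kind of reporting amounts to, here is a sketch; the records and field names are invented, but the query is faithful to the spirit of the thing:

    from datetime import date

    # The kind of incredibly fast answer to an irrelevant question our
    # systems excelled at. All records here are invented for illustration.
    jobs = [
        {"tract": "279", "date": date(2005, 10, 12), "caulk_feet": 140},
        {"tract": "279", "date": date(2005, 10, 27), "caulk_feet": 85},
        {"tract": "301", "date": date(2005, 10, 15), "caulk_feet": 200},
        {"tract": "279", "date": date(2005, 11, 2),  "caulk_feet": 60},
    ]

    total = sum(
        j["caulk_feet"]
        for j in jobs
        if j["tract"] == "279"
        and date(2005, 10, 1) <= j["date"] <= date(2005, 10, 31)
    )
    print(f"Linear feet of caulk, tract 279, October 2005: {total}")  # 225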

One thing I knew was that there was a lot of important work to be done in our workplace, and we were never able to keep up with the critical needs. This didn’t keep us from our trivial pursuits, though. The Funders on High knew what they wanted us to do. For them, it was important that we looked smart. Actually being smart was of minor concern.

********************

Who will decide when a human is a human? Some say it is when a heartbeat can be detected. Some say it is when brain activity can be detected. Some say it is when fingers can be detected. These choices are all arbitrary, of course. They are impressions from the outside, aided by technology. Technology changes, and what can be detected changes. They are useful measurements, of course. If the heartbeat is normally detectable at about six weeks and it can’t be detected at ten weeks, that is a strong indicator that the unborn child is failing to thrive. But what does the inability to detect a heartbeat at five weeks tell us? It tells us nothing. The planet has something on the order of 7 billion human inhabitants. We were unable to detect a heartbeat at five weeks for a single one of them. And yet they live. It is current medical practice to use data collection to determine when a human becomes a person. (Or is it when a person becomes a human? I haven’t been able to parse the meaning behind this verbal smoke-blowing.) And yet the data collection is irrelevant to the question. Garbage in; garbage out.

*********************

“There’s no hard-and-fast definition but we all know what the Facebook Age is. Its central hallmark is rapid, algorithmically driven dispersal of content, which gives preference to the vapid and the polarizing. The Facebook Age is not just Facebook; it is Twitter and TikTok and a million more imitators waiting in the wings. And it is our id; the algorithms bring to the fore what no one wants to admit to searching for. The Facebook Age is the vicious cycle of ideas shaken and stirred by online mobs, then often laundered by the media, and then injected back into social media to repeat the cycle.” – Mark Gimein, Managing Editor, THE WEEK

********************

“Not long ago, Facebook announced that it was changing its algorithms to make sure users see the type of news they want in their feed. The result of this was that fake news and conspiracy theories gained more traction, as different communities isolated themselves from each other. Because no one saw anything they didn’t want to see, they simply became more convinced of their own views—including the most dubious, idiosyncratic, or downright nutty of them. Facebook’s response was to try to find ways to incorporate user feedback to police or grade content: X is a reliable source; Y isn’t. But, to no one’s surprise, liberals who saw pro-life content were likely to unfollow or block the source and to dispute the content of the posting. Conservatives responded the exact same way to content promoting gun control. Facebook’s test ended up with a variety of important topics cordoned off as ‘Hate speech’.

“Which is to say, there will not be an easy tech fix for our quandary. Moral dilemmas can’t be resolved by a computer. More quantitative power doesn’t inexorably solve fundamentally qualitative problems.” – Ben Sasse 

*********************

“Artificial intelligence is fast becoming a part of being both a consumer and an employee. Apply for a credit card or mortgage and many banks will use AI to weigh your credit-worthiness. Apply for a job and the employer could use AI to rank your application. Call a customer-service number and AI technology might screen and route your call.

“AI refers to techniques that allow computers to learn, reason, infer, communicate and make decisions that in the past were the sole province of humans. Yet as AI technology spreads, so do concerns about its accuracy and fairness. Experts say it can have built-in racial, gender and age biases that could, for instance, rule out certain qualified people for jobs, or force some creditworthy borrowers to pay higher rates than otherwise. This has prompted calls for regulation, or at least greater transparency about how the systems work and their shortcomings.” – Bart Ziegler in WSJ
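
How a bias gets “built in” is less mysterious than it sounds: if a model learns from historical decisions, the history comes along for the ride. A toy scoring function makes the point; the weights and features are hypothetical, not any lender’s actual model:

    # Toy credit score showing how a "neutral" feature can smuggle in bias.
    # Weights and features are hypothetical, not any real lender's model.
    def credit_score(income: float, debt_ratio: float, zip_penalty: float) -> float:
        # zip_penalty is learned from historical defaults by ZIP code; if
        # past lending was discriminatory, this feature inherits the bias.
        return 0.6 * income / 1000 - 40 * debt_ratio - zip_penalty

    a = credit_score(income=80_000, debt_ratio=0.3, zip_penalty=0)
    b = credit_score(income=80_000, debt_ratio=0.3, zip_penalty=15)
    print(a, b)  # 36.0 vs 21.0: identical finances, different scores

Nobody typed “discriminate” into the code. The model simply learned the past and is now repeating it at scale.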

*************************

What can we say about algorithms? First of all, we all make use of them. Whenever we make decisions, we review our own experiences, even if unconsciously. We consider other sources that have commented on the question, steering toward the trusted sources and away from the ones that seem toxic. We take into account current circumstances. We do these things routinely and continuously, because that’s what it means to have a brain. Of course, we also make bad decisions sometimes, because we lack information or because we give inappropriate weight to certain information. For example: “I know the odds are really bad for me to gamble my $1,000 on the current state lottery, but just consider the return if I win! I could walk away with five million dollars!” So we survive because of our good use of simple algorithms, but we are also capable of making truly terrible decisions. This is human nature.
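
The inappropriate weighting in that lottery example is easy to expose with a little arithmetic. Here is a back-of-the-envelope check; the 1-in-10-million odds are my assumption, since real lotteries vary:

    # Expected-value check for the lottery example above.
    # Odds and jackpot are assumed for illustration; real lotteries vary.
    stake = 1_000                       # dollars spent on tickets
    jackpot = 5_000_000                 # the advertised prize
    odds_per_ticket = 1 / 10_000_000    # assumed odds per $1 ticket

    p_win = stake * odds_per_ticket     # approx. chance of at least one win
    expected_return = p_win * jackpot

    print(f"Chance of winning: {p_win:.4%}")                          # 0.0100%
    print(f"Expected return on ${stake:,}: ${expected_return:,.2f}")  # $500.00

On average, the $1,000 comes back as $500. “Just consider the return if I win” is the inappropriate weight doing its work.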

There are those who think the solution to this problem is to lean more on algorithms for our decision-making. Sometimes this makes sense. If the question is highly technical in nature and I am in need of technical input to make a better decision, then algorithms are likely to be a good option. 

But there are important reasons for caution. The first is that algorithms are created by humans. Continually tested algorithms will, no doubt, provide good-to-great information. But humans can be sloppy about continuous testing. Of even greater concern, humans can sometimes be profoundly evil. It may be that some very smart programmer has a deep hatred for people of my nationality or my sex or my race or my religion or my politics or my favorite sports team. Someone may “juice” the data. Garbage in; garbage out.

Additionally, many decisions are not about purely technical questions. If you are involved in medicine, which is fundamentally technical, you are also, to a surprisingly consistent degree, involved in medical ethics. This involves questions relating to the value of human life, when life begins and ends, and so on. Who will judge which algorithms are to be applied to you on such questions? “Oughts” permeate our lives. Deferring the oughts of our lives to algorithms created by individuals whose presuppositions are profoundly different from our own, well, this is a recipe for systemic injustice. Whenever we are at an intersection that calls for us to consider having other people make decisions for us, it is a time to slow way down. Even AI algorithms demand faith on the part of those who put their lives in the hands of the algorithm. As Tim Keller has said, “Strong faith in a weak branch is fatally inferior to weak faith in a strong branch.” Perhaps it is the greatest possible insult to those pushing for the use of algorithms to suggest any involvement of faith. Algorithms, they assure us, take all the religion and superstition and cultural bias out of decision-making. It would be difficult to come up with a better example of blind faith.