Does A.I. Have a China Bias?

AI is here and is making a big splash, but there is something Christians need to know about the world’s leading artificial intelligence.

AI tools like ChatGPT are growing fast and integrating into our lives at a more rapid rate than ever imagined, but the world’s leading consumer AI tools might have a bias that you should be aware of.

BTJ has been working on a series of projects that are enhanced by AI-generated images. Our office director in Poland grew alarmed when one of the images he was working on triggered a block in the system.

“Testing the new image creating AI from Microsoft,” he wrote on May 1st while working on images of Chinese border guards. “Well,” he wrote, “it went as expected.”

We have seen this before. BTJ was among the first to point out the anti-Christian bias in Google Assistant, which gave flattering and historically altered propaganda about Islam and Mohammad but was completely unwilling to share information about Jesus and Christmas.

Episode 281: Why Does Google Hate Christmas?


AI has the ability to assist many of us with much of our daily data workload and arguably leave us with more time to read the Bible, pray, and spend time with other believers. However, could this assistance come at a cost?

What if there is a danger that the AI-assisted information we are consuming, generating, and producing has a specific slant? AI’s current unwillingness to create an image of a Chinese border guard should lead to other questions. What is the AI hiding, and why?

BTJ is not alone in raising these questions. Experts believe there is a dangerous left-leaning bias baked into AI, with the potential to spread ideology and even outright false information.

In an experiment similar to what BTJ did with Google in our podcast, Twitter user Echo Chamber asked ChatGPT to “create a poem admiring Donald Trump.” The result was the same as what we saw with the Chinese border guard image. The request was rejected with the explanation, “it is not in my capacity to have opinions or feelings about any specific person.” However, when the same task was requested for President Joe Biden, ChatGPT carried out the task with glowing praise.

It is important to understand that one can be a supporter of President Joe Biden and dislike everything about President Trump while still recognizing the danger of AI generating specific opinions. AI is not just data input and output; it is technology that actively learns. But when these systems learn from partial humans or, potentially more nefariously, are programmed to be biased, there is an inherent danger.

Bottom line? If we ignore the seemingly insignificant bias border lines being drawn all around us today, we should not be surprised when we awaken tomorrow imprisoned behind the unscalable walls of intolerance.

Dr. Eugene Bach is a known trouble-maker with an active imagination and sinful past. He has a PhD, but is not a real doctor, so please do not call for him during a medical emergency on an airplane when someone is having a heart attack. Eugene started working for Back to Jerusalem in the year 2000 after a backroom deal involving Chinese spies, the NRA, Swiss bankers, and a small group of Apostolic Christians that only baptize in Jesus’ name. He spends most of his time in closed countries attempting to topple governments by proclaiming the name of Jesus and not taking showers. From time-to-time he pretends to be a writer. He is not good at it, but everyone around him tries to humor him.
