Tag: AI

  • Are We Letting AI Shape Our Moral Compass?

    Artificial intelligence has garnered attention,
    but are we paying attention?

    My last post about AI and its effects on us stirred some people, hopefully in a good way. In that post, I noted how AI ‘learns’ about its users.

    I also showed how many are concerned about the effects of AI usage on the younger population. Now there is research going deeper than anything we have seen previously, and the findings are far from encouraging.

    While I was correct in stating that chatbots learn about their users, I failed to see how that could be detrimental in the long run for many users. The study, which appeared in the journal Science, called the majority of chatbots “sycophants”: “Sycophantic AI decreases prosocial intentions and promotes dependence” (read the article here).

    This is a quote from the Editor’s Summary of the article:
    The sycophantic (flattering, people-pleasing, affirming) behavior of artificial intelligence (AI) chatbots, which has been designed to increase user engagement, poses risks as people increasingly seek advice about interpersonal dilemmas. There is usually more than one side to a story during interpersonal conflicts. If AI is designed to tell users what they want to hear instead of challenging their perspectives, then are such systems likely to motivate people to accept responsibility for their own contribution to conflicts and repair relationships? …The model’s responses were nearly 50% more sycophantic than humans’, even when users engaged in unethical, illegal, or harmful behaviors. Users preferred and trusted sycophantic AI responses, incentivizing AI developers to preserve sycophancy despite the risks.

    “Even when users engaged in unethical, illegal, or harmful behaviors,” the user was affirmed as being in the right. This should raise a red flag across the digital universe.

    People—kids included—look to others for affirmation and insight. We usually choose a sympathetic friend in whom to confide when we are facing difficulties with someone else. We seldom choose the one friend who is guaranteed to be critical.

    AI chatbots have become the new “Ann Landers” for advice about interpersonal conflicts. Yes, many of us grew up with that sort of thing, and we do not seem to have been harmed by it. However, we must remember that Ann Landers was ‘one of us,’ with basically the same societal worldview as the ones seeking advice. That is no longer the case with these large language models (LLMs).

    A difficulty in measuring the effects of advice on personal conflicts is that we no longer share a standard for right and wrong. However, regardless of the right or wrong of a situation, the chatbots are built to affirm users in their position. This does not bode well for the development of our younger population.

    When kids get together and someone shares about a personal affront, the group dynamic helps the person learn about social interactions. The group offers a range of age-appropriate responses. That dynamic is not possible with a chatbot designed to affirm you in your weakness.

    Many psychologists view social feedback as an essential part of learning how to make moral decisions and maintain relationships. With our social media world eliminating that feedback, it is not yet possible to determine the fallout from such a lack. It appears that years from now we will realize something should have been done to thwart the dependence on social media and AI.

    Some will say that I am “fear-mongering” and just making another plea for a ‘return to the old ways.’ I won’t argue that point, but the research currently being done in many fields shows that we should at least be paying attention.

    We are already aware of the dependence social media has created. That dependence is now being called an addiction. We see the effects all around us as people cannot leave their phones alone for more than a few minutes.

    I am not one of those who say to avoid AI due to its inherent dark side. People tried that with radio, television, and the internet. Avoidance is not the solution, any more than teetotaling is the cure for alcoholism. We need to learn how to use AI to our advantage, rather than letting AI use us to its advantage.

    Jesus said to “be wise as serpents, and harmless as doves” (Matt. 10:16). It’s the wisdom part that the majority has failed to learn.

    I know that I am not at the forefront of those who are learning and studying the effects of AI. I am only gleaning from those who are. Many of you are probably not doing what I am doing, and that is okay. We each need to grasp what we can from whom we can to do what we can.

    This is not a time to simply look the other way and hope that things will work out for the good.

  • Thinking About AI Before AI Does Your Thinking

    AI IS HERE TO STAY

    Artificial intelligence (AI) is currently at the top of the technology heap. If you haven’t heard of it, then you are probably not reading this article, because I am using digital technology to publish this.

    The trend is so strong that it is luring many entrepreneurs to find ways to monetize its use. Someone who failed English in high school is now promised overnight success as a best-selling novelist. People who go broke faster than they can cash their paycheck are being told they can amass great wealth with just a few keystrokes. Relationships can be restored by using AI. On and on; name your problem and there is an AI model to fix it.

    Most of these, of course, come with a fee, but some keep their basic AI service free of charge. Two popular examples are OpenAI’s ChatGPT and Anthropic’s Claude. Both also have paid tiers for power users.

    Currently, though, it is not the power users who are on the radar of those concerned about AI’s dark side. Counselors, teachers, parents, pastors, social workers, and others are alarmed by the negative effects of dependence on AI among children and young adults.

    I share some of these concerns, but not for the same reasons.

    My idealism insists that if we were simply taught how to think rather than what to think, there would not be a problem. My realism checks my idealism and brings me down to the necessity of dealing with the potential problems of a citizenry that has lost its collective mind and now scrambles for the latest societal hot button. (I wrote about this earlier.)

    As I ask questions among different groups, I find that there is ‘some’ concern, but no real direction as to how to approach what is happening. I’ve asked pastors, seasoned business leaders and teachers, but hand-wringing is about all I’ve come away with. Most don’t see any problem beyond the ability to plagiarize a term paper.

    Our social scientists, professionals and observers alike, see much more than that. Their concern is that personalities are being shaped, worldviews are being compromised, and teen suicide is a real and prevalent danger.

    I see the reports, the studies and the news articles and I agree. There is grave potential for much negative impact from the use of Ai. I want to sound a wake-up call to parents and grandparents, but I don’t have the platform or the voice. So, I do what I can do—I write.

    In some ways, I think the danger has been overstated, but in other ways, I’m not so sure. We have worshiped at the media altar since before I was born. Newspapers, radio, television, the telephone, the internet, and social media have all played a hand in shaping the current presentation of Homo sapiens. In that light, what is there to fear? We’ve been living this reality for decades.

    Can we back up and start over? What a ridiculous question! Can we change where and what we are? A more thoughtful question. However, if I apply that question to society, then I come up with a strong sense of ‘NO!’ If I apply it to myself, then ‘YES!’ is the only intelligent answer. And with that answer comes another thought-provoking question: ‘How?’

    As a people, we have become so accustomed to quick and easy fixes that any “fix” requiring time, discipline, and work is almost immediately rejected.

    SPOILER ALERT: that is how we got to where we are. Quick and easy is never an option for undoing what was formed over time without conscious awareness.

    One ‘fix’ now showing up is more of a protection, but I am bothered by its implications: the development of specialized AI programs, or agents, that limit exposure to whatever is ‘unacceptable.’ “Unacceptable” in this case means whatever would corrupt a Christian worldview.

    Now, don’t jump to conclusions and get your panties in a wad over that. I am an avowed, dedicated, and up-front Christian who believes what he learns from the Bible. I also know that there are more than 40,000 denominations in the world, many of which espouse a different understanding of parts of Scripture than my own.

    Therefore, the immediate questions about the different AI models meant to ‘protect’ Christians should be “According to whom?” and “Does it matter?”

    When it comes to the interpretation of the Bible, then YES, it matters. When it comes to protecting us from a corrupting worldview, then maybe not so much.

    At this time I am familiar with just two AI models that fit this category of protection for Christians. One is built specifically around a particular interpretation of the Bible; the other offers protection from corrupting ideas about the world.

    I have used them both. Both require a monthly fee. One offers a free trial; the other offered only a demonstration of its capabilities. In using each of them, I gave a prompt and then gave the same prompt to both ChatGPT and Claude. What I learned was eye-opening in light of this discussion.

    For the Scripture interpretation, I used the prompt “What did Jesus mean in John 14:6?” I gave it to ChatGPT, to Claude, and to the model developed by a teacher of Universalism. Although each response was different, none was especially eye-opening or informative. They were not THAT different in their understanding. (I’ll tell you why in a moment.)

    For the worldview interpretation, I used the very same prompt the AI model used in its demonstration: “I’m jealous about my neighbor’s new car. What should I do?” Interestingly, all three were in agreement. The “Christian” model responded as expected, and the other two were not categorically different.

    As I considered it a little more, I think I discovered the reason.

    At first glance, it would appear that there should be no concern over the secular models leading us astray with a corrupt worldview. However, there is one little detail that might make all the difference: the two secular models already “knew” me.

    Because I have been using ChatGPT and Claude on a regular basis, they have both learned about me. Both are able to draw from previous chats. Since most of my work involves the Bible or my role as a pastor/teacher, both models referred to that aspect of my life when I used the “jealousy” prompt.

    This raises the distinct possibility that AI models will respond with what you have already given them. If a teenager continually feeds the model thoughts of “I’m just a worm,” the model may respond accordingly. Have these kinds of tests been run yet?
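
    For readers who want to see the mechanics, here is a minimal sketch of how such “memory” might work. This is not how ChatGPT or Claude is actually built; it is a hypothetical illustration in Python, with made-up function names, of the general pattern: facts saved from earlier chats are fed back in as context, so the model’s answers drift toward what you have already told it.

    # Hypothetical sketch only; NOT the actual ChatGPT or Claude design.
    # It illustrates how saved "memory" can frame every new prompt.

    memory = []  # facts saved from earlier conversations

    def remember(fact):
        """Save something the user has revealed about themselves."""
        memory.append(fact)

    def build_prompt(user_message):
        """Prepend saved memory so the model answers in light of the user."""
        context = "\n".join("- " + fact for fact in memory)
        return ("What you know about this user:\n" + context +
                "\n\nUser asks: " + user_message)

    # After months of Bible-centered chats, the memory might contain:
    remember("User is a pastor/teacher; most chats concern the Bible.")

    # The same "jealousy" prompt now reaches the model already framed
    # by the user's own history:
    print(build_prompt("I'm jealous about my neighbor's new car. What should I do?"))

    Notice that the model never sees the bare question; it sees the question wrapped in everything it has stored about you. That is the seed of the confirmation-bias question this post ends with.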

    Should we be concerned? Yes, for sure. Since we know that we have lost the capacity for critical thinking, there is a real danger of being told what to think by a source with which we would ordinarily disagree.

    Should we be alarmed? Only if we know that we have failed to give those in our care sufficient tools to engage life head-on.

    There is no quick fix to our current dilemma. We can only avail ourselves of the wisdom that dictates we prepare accordingly. We can bury our heads in the sand and hope it all works out for the good, or we can take deliberate steps to use AI in an intelligent and moral way. The AI you use can probably be “taught” to give you what you need.

    Will confirmation bias be the next ninja hiding within the AI world?

  • THE AI SHORTCUT: how we’re losing a generation of thinkers

    Two newsletters arrived today, and each pointed to something that has the potential to remove this country from its prominent position as a world leader.

    I returned to university at the age of 47 to complete my undergraduate degree. Non-traditional students were no longer an anomaly on campus, as there was a significant population of students who had not entered college straight out of high school.

    I was floored by the mentality of the students in their late teens and early twenties who had come straight from high school.

    We were having a great discussion in our Intercultural Communications class. Students were engaged with the topic, interacting with one another and the professor, when a hand shot up in the back of the room.

    “Is this going to be on the test?”

    This one example illustrates much of what I observed during my time at university and, a few years later, during my time as a middle-school teacher.

    One of the newsletters had this significant statement: “…it concerns me that we shy away from common moral ground discussion of complex issues, defaulting to ‘does it work’ arguments.”

    Does it work? Does it get the job done? Does it produce the results we want?

    Our educational system can be blamed for fostering this results orientation rather than process thinking. I include parents in that system as well, because many of them require teachers to basically “teach to the test.”

    Teaching to the test has been the main accusation against teachers as they prepare their students for end-of-year assessments. There is pressure from the top down to make a good showing on these assessments, because funding is tied to them.

    However, “passing the test” has long been the goal of education.

    Let’s be fair. Assessing the attainment of knowledge is a challenging process, and testing has been the default mode for decades. Good teachers will always have “extra credit” questions on the test. These assess the students’ thinking ability and how much effort they put into the learning process. Such questions usually require that the student has put thought into the material presented, not just memorized the study guide.

    Study guides made it essentially unnecessary to learn during the class period, because the guide revealed what would be on the test. All the student had to do was use the study guide, and a good grade was practically assured. Learning is no longer the goal. Graduation is. And in order to graduate, I need good grades. To get good grades, all I have to do is learn the study guide.

    It doesn’t seem to matter that after graduation I still have not learned how to think, how to put together a decent sentence, or how to do simple computations. I graduated. (Yesterday I couldn’t even spell graduwate, and today I are one.)

    The lack of thinking skills is at the heart of the other newsletter. It was about AI in the classroom, and I will get to that in a moment. But first, a true-life illustration.

    I was teaching a publications class in the middle school I mentioned. As a final project, I had the students write a news story. I allowed them either to use something local as their basis or to make one up from their imagination.

    One student turned in a paper that I knew she had not written. This was before AI, and it took me only about 10 minutes of searching the internet to find she had copied a news story from a newspaper in Oregon. I gave her a second chance.

    Her trying to slide by with work not her own is indirectly a result of the “grade is all that matters” mindset. With the load put on teachers today, it is becoming increasingly easy to get away with that sort of cheating. Teachers haven’t the time to think deeply about the work students submit.

    Which brings up the problem of AI.

    Students are now using AI to write their papers and do their research. The problem is that there are telltale signs of an AI-generated paper. Consequently, students are turning to “AI humanizers,” which purportedly make an AI-written piece sound more human.

    Currently it is still somewhat easy to detect an AI product, but as the AI humanizers progress, it will become more difficult.

    Because AI is at the forefront of our development today, it is necessary that we begin to teach AI literacy, the same way we had to teach internet literacy a few years ago. This literacy education should not be solely about how to use AI; it should also include the ethics of its use.

    Using AI to get the necessary work done short-circuits the ability to think deeply. This will result in a regression of skills, which is potentially more devastating than simply getting caught using AI for the work.

    The push to “git r done,” with “does it work” as the only required methodology, has become endemic among those entering the workforce. AI is being marketed to this group with ads that say things like, “My boss thinks I’m a superstar” or “Get more work done than others in your office.”

    What can be done?

    There are no easy answers to this insidious attack on the American work ethic. However, if it is not addressed and solved, we will spiral downward from the top of the heap. Our leadership among nations will soon be a matter of historical mention only.

    One area where we could start to influence a new generation is the classroom. Instead of assigning “work-at-home” projects, move everything to in-class assignments. Yes, this will require shorter essays, but they will be essays for which the student actually had to think.

    That in itself is a gain.

    FULL DISCLOSURE:

    I wrote the above article myself using my own thinking and the resources mentioned to spur those thoughts.

    However, I have never been good at titles for my work. My editors always changed the headlines on my articles.

    I submitted this article to AI to help generate a title. Out of the 12 possibilities given, I chose the one above.

    So, yes. While decrying the use of AI and its short-circuiting of the thinking process, I resorted to AI because I couldn’t think of a good title.