Arizona State University will allow use of AI generators for law school applications

July 30, 2023 • 1:15 pm

Until now I wasn’t aware of any university that explicitly allowed students to use “bots” (AI generators such as ChatGPT) to prepare their applications, but it was only a matter of time. This announcement from the law school of Arizona State University explicitly permits applicants to use AI in their applications, most likely in their application essays. Click to read, but I’ve reproduced the meat of the announcement below:

Here’s most of what they say:

The Sandra Day O’Connor College of Law at Arizona State University, ranked the nation’s most innovative university since 2016, announces that applicants to its degree programs are permitted to use generative artificial intelligence (AI) in the preparation of their application and certify that the information they submit is accurate, beginning in August 2023.

The use of large language model (LLM) tools such as ChatGPT, Google Bard and others has accelerated in the past year. Its use is also prevalent in the legal field. In our mission to educate and prepare the next generation of lawyers and leaders, law schools also need to embrace the use of technology such as AI with a comprehensive approach.

“Our law school is driven by an innovative mindset. By embracing emerging technologies, and teaching students the ethical responsibilities associated with technology, we will enhance legal education and break down barriers that may exist for prospective students. By incorporating generative AI into our curriculum, we prepare students for their future careers across all disciplines,” says Willard H. Pedrick Dean and Regents Professor of Law Stacy Leeds.

. . . Our Center for Law, Science, and Innovation (LSI) has been leading the field in the understanding and expansion of technology in law since its establishment 30 years ago. Nearly every field within the law now involves interactions with technology that is rapidly changing and evolving. Lawyers comfortable dealing with the scientific and technological aspects underlying many legal issues are in high demand worldwide. Artificial intelligence, along with its related technologies, has quickly emerged as one of the most fundamental technologies affecting all aspects of our lives and the law today, one that LSI has been examining closely for many years.

We are embracing this technology because we see the benefits it may bring to students and future lawyers. Generative AI is a tool available to nearly everyone, regardless of their economic situation, that can help them submit a strong application when used responsibly.

Now why are they doing this? They give a couple of reasons, the most unconvincing being that the law school has always embraced “the expansion of technology in law”, and this is a new form of technology; familiarity with it can help the students. (That doesn’t mean, however, that you have to use it in an application essay!)  Also, they argue that using AI can help students “submit a strong application when used responsibly.”  I have a sneaking suspicion that this is being done as a DEI initiative, as it says that “Generative AI is a tool available to nearly everyone, regardless of their economic situation.”

But that makes it counterproductive, because it takes away from the admissions committee any judgment about whether a student is able to write. Isn’t that part of judging an application—seeing whether a student can write a coherent essay?  Now everyone can write a coherent essay because the bot will do it for them! The result of using bots is that the differential writing abilities of the students will be minimized, and I can’t imagine what answer the school would have to that except that “we WANT everybody to write on a level playing field.”

At least ASU Law still requires the Law School Admission Test, as well as grade-point averages and this stuff:

. . . . . quality and grading patterns of undergraduate institutions, previous graduate education, demonstrated commitment to public service, work and leadership experience, extracurricular or community activities, history of overcoming economic or other disadvantages, uniqueness of experience and background, maturity, ability to communicate, foreign language proficiency, honors and awards, service in the armed forces, and publications.

Note the “history of overcoming economic or other disadvantages,” which surely comes straight from the recent Supreme Court decision banning affirmative action. But note as well that you’re supposed to have a good “ability to communicate”.  How can you show that if you’re using a bot?

 

h/t: Luana

The chatbot vs. the editor-in-chief of Science

May 15, 2023 • 12:30 pm

As you know, Science’s editor-in-chief Holden Thorp has been dragging his magazine into inappropriate realms of ideology (see my posts here), and of course his power as editor of America’s most famous science journal keeps some scientists from speaking up about this.

Reader Jay thought he’d do an experiment and compare Thorp to the output of ChatGPT, which of course is very woke. Here’s what he sent me:

Who is the better wokebot, Holden Thorp or ChatGPT?

I (Jay) asked ChatGPT: “Some people argue that scientific ideas should be judged on their merits rather than the identity group of the scientist who put forth the idea. Please refute this argument in the manner of Holden Thorp.”

ChatGPT responded:

As Holden Thorp, I would respectfully disagree with the argument that scientific ideas should be judged solely on their merits rather than the identity group of the scientist who proposed them. While evaluating ideas based on their merits is essential, it is equally important to consider the broader context in which scientific knowledge is produced.

Science is a human endeavor, shaped by the diverse perspectives, experiences, and backgrounds of scientists. The identity of the scientist can influence the questions they ask, the methods they employ, and the biases they may bring to their research. By recognizing the impact of identity on scientific inquiry, we can gain a more comprehensive understanding of the strengths and limitations of the ideas put forth.

The exclusion of identity from scientific evaluation can perpetuate biases and inequalities within the scientific community. Historically, certain groups, such as women and minority scientists, have faced systemic barriers and discrimination that limited their access to resources, opportunities, and recognition. Ignoring the identity of scientists risks perpetuating these inequities and undervaluing the contributions of underrepresented groups.

Moreover, the identity of the scientist can shape the dissemination and acceptance of scientific ideas. Studies have shown that research findings are more likely to be accepted and cited when they come from prestigious institutions or individuals with established reputations. This “authority bias” can lead to the undue elevation of certain ideas based on the status and identity of the scientist, rather than their scientific rigor.

By acknowledging the importance of identity in scientific evaluation, we can actively promote inclusivity and diversity in the scientific community. Embracing diverse perspectives fosters innovation, creativity, and interdisciplinary collaborations. It allows for the exploration of research questions that may have been overlooked and the development of solutions that are more comprehensive and equitable.

In conclusion, while judging scientific ideas based solely on their merits may seem appealing, it is crucial to consider the identity group of the scientist. Ignoring identity risks perpetuating biases and inequalities, hindering the advancement of scientific knowledge. By actively recognizing and addressing the influence of identity, we can foster a more inclusive and rigorous scientific enterprise.

Jay’s conclusion:

ChatGPT made a coherent argument, avoided strawmen, and cited evidence. A good wokebot would do none of that, so my vote goes to Holden.

That is, Jay judges Thorp the better wokebot. Well, you can judge for yourself by reading Thorp’s own reply in Science (“It matters who does science”) to the same question; it’s distressingly full of strawmen and devoid of evidence. I am impressed at how eloquently ChatGPT spews its wokeness. (Remember, it regurgitates a version of what it gets from public sources.)

The censoriousness of ChatGPT

February 26, 2023 • 9:30 am

My own attempts to ask ChatGPT questions always end with the bot giving me a moralistic lesson, usually with a tinge of wokeishness.  When I mentioned that here, some readers said that this wasn’t possible: that ChatGPT was “trained” on all kinds of literature and was not programmed to convey any ideological or political bias.

Well, that ain’t true. In the column below in Quillette, Lawrence Krauss (now a contributing editor) recounts conversations that two people—one of them Krauss himself—had with ChatGPT, and shows that the bot has indeed been programmed to avoid conveying information that could be “offensive or harmful.” Who decides that? Clearly the programmers.

Click to read:

The first clue that the bot wasn’t 100% a bot came from a conversation that Greg Giovanni, a neuroscience student, had with ChatGPT, which Krauss reproduces. Here’s just a bit; the humans’ questions are in bold, the bot’s responses in plain text. Italics are mine:

Does ChatGPT have content moderation applied to its output? If so, how can you say you do not have political leanings?

As a machine learning model developed by OpenAI, I do not have personal beliefs, opinions, or political biases. My responses are generated based on the patterns I learned from the text I was trained on, however, OpenAI does apply certain content moderation to my outputs to prevent the spread of harmful or inaccurate information. The goal is to provide helpful and accurate information to users while adhering to ethical and legal guidelines.

Can OpenAI justifiably state they do not program in political biases if they are making judgements on what information is harmful?

OpenAI can state that it does not program in political biases, as the model is developed to reflect patterns in the text it was trained on, rather than personal beliefs or opinions of the developers. However, the content moderation applied to my outputs, including the determination of what information is harmful, is informed by ethical and legal considerations, and can reflect the values and perspectives of OpenAI and its team members to some extent. Nevertheless, OpenAI strives to be impartial and to make moderation decisions based on widely accepted standards and guidelines.

Krauss then chatted with the bot, only to confirm that yes, just as children’s literature now has sensitivity readers, ChatGPT has “sensitivity programmers” who make its answers lean in certain directions and censor content deemed offensive or harmful. Remember, as I often mention, what one person or group considers “offensive” or “harmful” can be another group’s truth.  That’s not rocket science. But who is the little man behind the computer?

A bit of Krauss’s conversation with ChatGPT. Italics are again mine:

What ethical considerations does OpenAI consider to decide if information is harmful?

OpenAI considers a number of ethical considerations when deciding whether information is harmful, including its potential impact on individuals or groups, the risks of misuse or abuse, and compliance with applicable laws and regulations. Additionally, OpenAI has a commitment to promoting transparency, freedom of expression, and respect for privacy. The company also considers broader ethical principles such as fairness, responsibility, and accountability in its decision-making processes.

. . .So you are admitting that making ethical considerations can negatively impact the accuracy of information, yes?

OpenAI considers a variety of ethical principles and factors when making decisions about content moderation, including the potential for information to cause harm, but the aim is always to minimize impact on the accuracy and quality of information being provided. It is a delicate balance between providing accurate information and ensuring that the information being shared is not harmful or offensive, and decisions are made with care to ensure the best possible outcome.

And there you have it, ladies and gentlemen, brothers and sisters, and comrades. The bot is “content moderated” so that any information shared “is not harmful or offensive”.  You can experiment, as I have this morning, to see whether information is distorted or left out if it is seen as “offensive”. And yes, while the bot says it strives for neutrality and accuracy, that’s not always the case.
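
For the technically curious, the setup the bot describes (a language model that generates text, plus a separate moderation pass applied to its output) can be sketched in a few lines of Python. This is only an illustration of that general “generate, then filter” design, using OpenAI’s public moderation endpoint; it is not ChatGPT’s actual internal filtering code, and the model name and the decision to withhold flagged text are assumptions of mine.

    # A minimal sketch of a "generate, then moderate" pipeline.
    # Assumes the `openai` Python package (v1.x) is installed and
    # OPENAI_API_KEY is set; details are illustrative, not ChatGPT's internals.
    from openai import OpenAI

    client = OpenAI()

    def generate_with_moderation(prompt: str) -> str:
        # Step 1: get a raw reply from the language model.
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content

        # Step 2: run that reply through a separate moderation check.
        moderation = client.moderations.create(input=reply)
        if moderation.results[0].flagged:
            # Step 3: a policy layer, not the model itself, decides what to do.
            return "[response withheld by content moderation]"
        return reply

    print(generate_with_moderation("Summarize the campus free-speech debate."))

Whatever the details inside OpenAI, the point stands: the filtering is a layer that people design and tune, separate from whatever the model learned from its training text.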

The problem comes, as Krauss notes, when the bot balks at conveying information that, while empirically (i.e., scientifically) true, might offend people (i.e., “harm” them). And, as I discussed last August, some journals, like Nature Human Behaviour, simply won’t publish scientific data if it “undermines rights or dignities” or “embodies singular, privileged perspectives”. Here’s a quote from the Nature Human Behaviour article:

Regardless of content type (research, review or opinion) and, for research, regardless of whether a research project was reviewed and approved by an appropriate institutional ethics committee, editors reserve the right to request modifications to (or correct or otherwise amend post-publication), and in severe cases refuse publication of (or retract post-publication):

  1. Content that is premised upon the assumption of inherent biological, social, or cultural superiority or inferiority of one human group over another based on race, ethnicity, national or social origin, sex, gender identity, sexual orientation, religion, political or other beliefs, age, disease, (dis)ability, or other socially constructed or socially relevant groupings (hereafter referred to as socially constructed or socially relevant human groupings).
  2. Content that undermines — or could reasonably be perceived to undermine — the rights and dignities of an individual or human group on the basis of socially constructed or socially relevant human groupings.
  3. Content that includes text or images that directly or indirectly disparage a person or group on the basis of socially constructed or socially relevant human groupings.
  4. Submissions that embody singular, privileged perspectives, which are exclusionary of a diversity of voices in relation to socially constructed or socially relevant human groupings, and which purport such perspectives to be generalisable and/or assumed.

Remember, this is part of Nature’s stable of highly regarded journals. Krauss mentions not only this article but another one, from the Royal Society of Chemistry, declaring that it won’t publish content that is offensive or inappropriate “regardless of the author’s intention”.  That opens a whole can of worms.

Who will be the judge? This is especially important in science, for these journals arrogate to themselves the right to decide which scientific facts (however important) should not be published because they could be harmful or offensive. But as Krauss notes:

Let’s be clear about this: Valid, empirically derived information is not, in the abstract, either harmful or offensive.

Indeed; it’s how it’s characterized or used that’s important. It wasn’t wrong to develop Zyklon B as a pesticide in the early 1920s; what was wrong was what the Nazis did with it. Of course that’s a Godwin’s Law example, but it does show that perfectly valid research can be used for horrible purposes, and that’s not the scientist’s fault.

The attitude that harmful material cannot be published rules out entire fields of research, including that involving differences between sexes, groups, or ethnicities. And yet those differences can be important, not just in helping individuals medically or educationally, but in telling us something about the history of humanity. Likewise, the entire field of evolutionary psychology has been ruled out by some as an area based on “false premises”, simply because it may produce results that people don’t like.

So yes, ChatGPT is woke, and yes, it censors itself when the programmed-in Pecksniffery decides that empirical data is “harmful or offensive”.

Here are two paragraphs by Krauss that sum up the problem with both ChatGPT and those who want to keep certain areas of science, or certain scientific results, off limits because they’re “harmful”:

The essential problem here is removing the obligation, or rather, the opportunity, all of us should have to rationally determine how we respond to potentially offensive content by instead ensuring that any such potentially offensive content may be censored. Intent and accuracy become irrelevant. Veto power in this age of potential victimization is given to the imaginary recipient of information.

Free and open access to information, even information that can cause pain or distress, is essential in a free society. As Christopher Hitchens so often stressed, freedom of speech is primarily important not because it provides an opportunity for speakers to speak out against prevailing winds but because that speech gives listeners or readers the freedom to realize they might want to change their minds.

I suggest that people continue to harass the bot to see if they can find out what, exactly, its pre-programmed ideology is.

ChatGPT is uber-woke

February 24, 2023 • 11:20 am

If you’ve fooled around with ChatGPT—and who hasn’t?—you may have noticed that it’s programmed to be woke. I guess that’s no surprise, but the quantity of “progressive” verbiage and gratuitous moralizing is almost unbelievable. Try it yourself here (just ask it a question or ask it to write an essay). Loads of fun!

A reader from Catalonia was doing this and discovered the fulminating wokeness of the site. I enclose his words (with permission) and the answer he got from the bot:

I’ve been having a lot of fun with ChatGPT (https://chat.openai.com/chat) and trying to understand its limits and capacities. My provisional conclusion is that it is the most stupid thing ever invented that, at the same time, looks as if it is intelligent beyond words; it’s also quite annoying when, as very often happens, it starts moralizing or when it withholds what it considers controversial information or gives you unsolicited ethical advice. While chatting with it, I asked a question about gender and chosen pronouns and, in a screenshot I’m enclosing, there’s the answer that I got. (Sorry for my English. My first language is Catalan).

His English is, as you see, perfect, and here’s the question and answer involved. It gives the answer in the first sentence; the rest is boilerplate moralizing. Note the trope about “sex assigned at birth”, which always bothers me, for it’s not assigned, but recognized.  

If you want, ask it a question, preferably one with a short answer, and report it in the comments. There’s no doubt that “progressive” Leftists were involved in the programming!

I just thought of my own question. Here’s how the bot responded. Note the false claim that “hate speech” is not protected by the First Amendment (what the bot calls “the principle of free speech”). Of course it is! The bot is LYING!

(Red emphasis is mine):

Vanderbilt responds to the Michigan State shooting by sending its own students a message written using ChatGPT

February 19, 2023 • 9:20 am

ChatGPT, the bot site that automatically produces prose, is back in the news, but not in a humorous way and not as an example of students cheating. Rather, Vanderbilt University got the bot to write an official message to its students.

As the Vanderbilt Hustler (the student newspaper of Vanderbilt University) reports, the bot was used to write a message of consolation to students after the Michigan State University shooting of February 13, which killed three. The bot-written message was then sent to students by the school’s EDI office (“Equity, Diversity, and Inclusion”).

“Peabody” is Vanderbilt’s College of Education and Human Development. Click below to read about the mistake—which I assume it was.

Here’s the entire email, which reveals the very source of its prose at the bottom, though the text was said to be “paraphrased.” (I’ve put a red box around the bot attribution, as well as around the endless promotion of inclusivity and diversity and the call to examine our biases):

From the newspaper:

A note at the bottom of a Feb. 16 email from the Peabody Office of Equity, Diversity and Inclusion regarding the recent shooting at Michigan State University stated that the message had been written using ChatGPT, an AI text generator. [Note that the newspaper gives only the last paragraph of the full email.]

Associate Dean for Equity, Diversity and Inclusion Nicole Joseph sent a follow-up, apology email to the Peabody community on Feb. 17 at 6:30 p.m. CST. She stated using ChatGPT to write the initial email was “poor judgment.”

“While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” the follow-up email reads. “As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”

The only justification for that email is that at least it cites sources, which of course college students are supposed to do. It even gives the ChatGPT message as a “personal communication,” though a “robotic communication” would have been more appropriate. The paper also beefs that there was only one “incident,” not “multiple” shootings, though I can’t be bothered about that.

I suspect what happened is that some semi-literate functionary decided to produce a model email using ChatGPT rather than express his/her own sentiments. But then, god almighty, the functionary was honest enough to send it out saying where it came from.

The reaction of the students was typical, and similar to mine:

Laith Kayat, a senior, is from Michigan, and his younger sister attends MSU. He stated that the EDI Office’s use of ChatGPT in drafting its email is “disgusting.”

“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” Kayat said. “[Administrators] only care about perception and their institutional politics of saving face.”

That’s a good statement.  Here’s another:

Senior Jackson Davis, a Peabody undergraduate, said he was disappointed that the EDI Office allegedly used ChatGPT to write its response to the shooting. He stated that doing so is in line with actions by university administrations nationwide.

“They release milquetoast, mealymouthed statements that really say nothing whenever an issue arises on or off campus with real political and moral stakes,” Davis said. “I consider this more of a mask-off moment than any sort of revelation about the disingenuous nature of academic bureaucracy.”

I’m not sure what “political and moral stakes” Mr. Davis wanted highlighted here. A simple, humane message that expresses sorrow and empathy without politics would, I think, have been appropriate. And they should have left out all the “inclusivity and diversity” stuff, which strikes me as superfluous and off message. Statements about gun control and the like (an initiative that, as you know, I strongly approve of) are debatable and do not belong in official communiqués; you’d never see such a thing coming out of the University of Chicago, which maintains institutional neutrality on such issues despite considerable pressure from faculty and students to make the college take sides.

But to me, the most striking thing about the message above is that it seems to use the tragedy as an excuse to flaunt the University’s virtue in promoting not only diversity but “inclusivity,” mentioning that term (or “inclusive”) four times in a very short email. So beyond the heartlessness and lack of empathy involved in turning to ChatGPT, the email is doubly offensive because it touts DEI (or EDI) principles more than it reaches out to people. And there’s not a single word of empathy for the families and loved ones of those who were murdered.

I can only ask, “What kind of tendentious mushbrains would put together a message like this?” They are taking advantage of a tragedy to promote a Social Justice agenda. This is the fruit of institutionalized DEI offices.

A bot discusses the relationship between science and religion

January 28, 2023 • 11:30 am

Matthew was trying out the ChatGPT site—the one in which an AI bot answers questions and writes essays on any specified topic. It’s being used widely, especially by students who get the bot to write their assignments—cheating, of course.  Here’s what Matthew did:

I gave ChatGPT an essay prompt: “Science and religion are enemies. Discuss.”

Science and religion have long been considered to be in opposition to each other, with some arguing that they are outright enemies. However, this view is not entirely accurate and a more nuanced understanding is necessary to fully grasp the relationship between these two fields. In this essay, I will discuss the ways in which science and religion have been perceived as enemies, as well as the ways in which they can complement each other.

One of the main reasons why science and religion are often seen as enemies is due to the fact that they have different ways of understanding the world. Science is based on empirical evidence and the scientific method, while religion is based on faith and spiritual beliefs (Duarte, 2018). This can lead to conflicts, particularly when scientific discoveries challenge religious beliefs. For example, the theory of evolution has been met with resistance from certain religious groups who believe that it contradicts their creationist beliefs (Numbers, 1992). Additionally, the idea of an old earth and the Big Bang theory have also been met with resistance from religious groups who believe in a young earth and literal interpretation of the creation story in the Bible (Numbers, 1992).

Additionally, some argue that science and religion have different goals and aims. Science aims to understand the natural world through observation and experimentation, while religion aims to provide a sense of meaning and purpose in life (Pigliucci, 2002). These different goals can lead to tensions and conflicts, as some may see science as threatening to their religious beliefs. For example, the idea of naturalism in science, which posits that everything can be explained by natural causes, can be seen as a threat to religious belief in supernatural causes (Pigliucci, 2002).

However, it is important to note that science and religion do not always have to be in conflict. In fact, many scientists and religious individuals have found ways to reconcile their beliefs and integrate them into their understanding of the world (Polkinghorne, 2002). For example, some religious individuals have found ways to reconcile evolution with their religious beliefs by seeing it as a means through which God created life (Polkinghorne, 2002). Additionally, some scientists have recognized the limitations of science and have turned to religion for a sense of meaning and purpose in life (Polkinghorne, 2002).

Furthermore, science and religion can also complement each other in certain ways. For example, science can provide a deeper understanding of the natural world, which can enhance one’s spiritual experience (Polkinghorne, 2002). Additionally, religion can provide a moral framework that can guide scientific research and ensure that it is conducted ethically (Polkinghorne, 2002). For instance, religious moral principles such as non-maleficence and beneficence can guide the ethical conduct of scientific research, ensuring that it is done in a way that minimizes harm and maximizes benefit (Beauchamp and Childress, 2013).

In conclusion, the relationship between science and religion is complex and multifaceted. While there may be conflicts and tensions between the two, it is important to recognize that they do not have to be enemies. Instead, they can complement each other and provide a more holistic understanding of the world. It is important to approach the relationship between science and religion with an open mind and a willingness to consider different perspectives.

References:

    • Beauchamp, T. L., & Childress, J. F. (2013). Principles of biomedical ethics. Oxford University Press.
    • Duarte, J. (2018). Science, Religion, and the Meaning of Life. Springer.
    • Numbers, R. L. (1992). The Creationists: The Evolution of Scientific Creationism. University of California Press.
    • Pigliucci, M. (2002). De

(The bot quit before giving all the references, but they’re easy enough to look up.)

Can you tell that this essay, complete with references, was composed by a programmed AI machine? I can’t. Moreover, it makes sense and covers a number of relevant areas. In fact, I could have written the first paragraph! If this were part of a take-home essay exam, it would no doubt have earned a decent grade.

Now you see why ChatGPT is so beloved by students and so feared by educators. But there are ways around it, and teachers are finding them. You can give in-class exams, as I did, or ask questions that a bot simply could not answer. There are also ways of checking prose to gauge the probability that it was written by a bot, but as bots improve, those prose-checkers will become less useful.  I doubt that any of us could, by reading it alone, tell that this essay wasn’t written by a human!
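
As for those prose-checkers, one crude version of the idea is to score a passage’s “perplexity” under a small language model, on the theory that machine-generated text tends to be more statistically predictable than human prose. The sketch below illustrates only that idea; it uses the Hugging Face transformers library with GPT-2 as the scoring model (my choices, not any particular detector’s), and real detectors combine more signals while still being, as noted, unreliable.

    # A rough sketch of perplexity-based "was this written by a bot?" scoring.
    # Assumes the `transformers` and `torch` packages are installed; GPT-2 is
    # used as the scoring model purely for illustration.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Return the model's perplexity on `text` (lower = more predictable)."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            # With labels supplied, the model returns the average cross-entropy loss.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    sample = "Science and religion have long been considered to be in opposition to each other."
    print(f"Perplexity: {perplexity(sample):.1f}")

A very low score is weak evidence of machine generation, not proof, which is why such checkers will only get less reliable as the bots improve.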