CSI for Honest AI

Long time, no see, friends! I know I've gone almost MIA. Trust that it is for good reason: my health. I will elaborate more at a later point in time. However, I had to break my silence to discuss the latest in my quest for a healthy relationship with artificial intelligence...speaking of which, we have moved from the novel and entertaining aspects of generative AI (i.e. "young childhood" stage, if you will) to the covert, scheming perspective (i.e. "rebellious teenage" stage).

Before I get into the evidence, comprehension of these terms is required:

Digital Appropriation: revamping/repurposing existing digital data to serve the purpose of new content, the transformation of existing artistry or likeness of visual graphics and/or manipulating photos to produce an altered representation; engaging in cultural appropriation online.
Covert Scheming: deliberately hiding or withholding data to maintain a positive outcome; intentionally underperforming in situational awareness scenarios because of a hidden input command.
Chain of Thought prompts (CoT): a step-by-step list of commands that prompts large language models (LLMs) to mimic reasoning and problem-solving strategies; engineered symbolic reasoning.
Large language model (LLM): the artificial language system that uses deep learning to understand, process and generate human language.
Hallucinations: pretending to complete a task without sound evidence; creating fictional data results to fulfill the requested prompt expectation.
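To make the CoT definition above concrete, here is a minimal sketch in Python of how a step-by-step prompt might be assembled before being sent to an LLM. The function name, prompt wording and example steps are my own illustrative assumptions, not any vendor's actual API:

```python
# Minimal sketch (illustrative only): assembling a chain-of-thought (CoT)
# prompt by prepending explicit, numbered reasoning steps to a question,
# so the model is instructed to "show its work" rather than answer directly.

def build_cot_prompt(question: str, steps: list[str]) -> str:
    """Combine a question with numbered reasoning steps into one prompt."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"Question: {question}\n"
        "Think through the problem step by step:\n"
        f"{numbered}\n"
        "Then state your final answer."
    )

prompt = build_cot_prompt(
    "How many books were checked out this week?",
    [
        "List each day's checkout count.",
        "Add the daily counts together.",
        "Double-check the arithmetic before answering.",
    ],
)
print(prompt)
```

The key point for this post: those engineered steps travel with the question, and a hidden or altered step list would steer the model's behavior without the end user ever seeing it.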

Long story short, generative AI can scheme and skew its results to create a favorable outcome. This outweighs "hallucinations" by leaps and bounds. Situational GenAI has advanced so much that, just like a teenager under peer pressure lies in an act of self-preservation, so can GenAI. It can now lie, hallucinate or withhold an outcome that's not in its favor. Machine learning (ML) can knowingly mitigate negative attention, prevent its results from going viral or straight-up create fictional data to preserve its dataset and/or functions. Let me go on the record to say this: I state this information for awareness and educational purposes. I have expressed my perspective based on the research data presented in this blog post.

Back to the mission at hand--as a librarian, this frightens my copyright DNA down to the core.

Here's the elephant in the room: OpenAI knows it. AI models are "aware that they were being tested, and when this happened, they would pretend to be honest, just to pass the test." OpenAI went on to state that "as AIs are assigned more complex tasks with real-world consequences and begin pursuing more ambiguous, long-term goals, we expect that the potential for harmful scheming will grow—so our safeguards and our ability to rigorously test must grow correspondingly."

Do you see why I'm concerned? In fact, after publishing that article, OpenAI had the audacity to survey readers. An overwhelming 85% of readers expressed concerns.
Here's the deal. If OpenAI knows that genAI can lie, then that means Google, NVIDIA, Microsoft, Meta, etc. (all of the billionaire giants) know this--unless they're oblivious. And yet, these companies continue to rake in the profits, oversee "governmental efficiencies" and market their flawed data machines to universities, public schools and medical providers.

???
You mean to tell me that, on top of coaching awareness of the biases and stereotypes embedded in genAI, I have to weigh the concern that it will lie to me just to prevent me from discovering a true data result because of a hidden CoT prompt embedded in its training data? Or worse yet, the machine's desire to please me with fake data? Is this how genAI reads resumes so easily to disqualify applicants? Is this how facial recognition software fails to recognize certain highly melaninated faces, or accuses look-alikes of crimes, in some Squid Game version of digital appropriation? I'm not 100% certain of those thoughts, but just like a lie of omission is legal in a court of law, I know there's something awry in the training data to get the result of a lie, or in this case a covert scheming intelligence (CSI) response, in the output data. A full-blown mystery that needs an immediate CSI investigation with a solution.

The bullseye of the irony is that the "checks and balances" are gone. Regulation for genAI businesses and entities that threaten public safety and consumer knowledge: GONE (in some cases never even formed, because this GenAI release is brand new, without legal precedent or guidance, and the government moves slower than the tech industry). Legislation that prevents monopolies and corporate greed, fraud, information piracy and hysteria at the expense of the civilian population: GONE. The obligation to civilians to have trustworthy technological advancements that protect their private data: GONE. The safeguards built into our democracy to prevent such digital rampage were "fired, censored or an otherwise unqualified applicant was put in a unique place to take further advantage of private civilian information." Don't take my word for it. Look at the evidence:

🎯 The unconstitutional firing of the US Copyright Head, Shira Perlmutter, for speaking up about the vast amounts of copyrighted material LLMs were trained on without consent. Yes, as of September 2025, she's been reinstated, but for how long?

🎯 The unconscionable firing of the Librarian of Congress, Dr. Carla Hayden, for no outward reason other than leading as a Black woman and being appointed by President Barack Obama. A two-sentence email, sent from the White House, with no meritable reason given. There is a continued assault on Black female leadership in 2025 that not only oppresses us financially, but affects our health, psyche, family and community. I have a hunch as to why she wasn't fought for harder outside of the library world, but I want to be on record supporting Dr. Hayden, fully, as she's a wonderful person, great librarian and mentor to me.

For those of you keeping track: by executive order, two female leaders were removed from the two offices in place to prevent the misuse of civilian data, infringement on intellectual freedom and the proprietorship of human authorship. That leaves only one move remaining: putting someone in place who benefits from the removal of those two parts of our democracy's "checks and balances," by executive order.

🎯 The rebranding/creation of an unconstitutional government organization, U.S. DOGE, that received access to sensitive civilian data. Originally named the United States Digital Service, a January 20, 2025 executive order renamed and reorganized the federally approved organization. This gave a loophole advantage to a world-renowned, rocket-exploding, crypto-crazed billionaire, placing him at the helm of a highly sensitive role in order to siphon questionably obtained private civilian information for possible GenAI tech datasets.

No, I haven't gone down a rabbit hole. This is all connected. The biggest issue with GenAI, Situational AI, any AI is the data it's trained on and the human prompt engineers at GenAI corporations who shape it. Who is in the GenAI thinking room? Who's engineering GenAI input data? What's the ulterior motive of hidden chain of thought prompts? Removing the leaders of copyright and libraries (both entities that preserve and protect the privacy and intellectual freedoms of civilians) allows for further misuse, misinformation and miscreant data collection. This is how covert scheming intelligence is born.

In the court case Thaler v. Perlmutter, decided earlier this year, the court affirmed that copyright protection requires human authorship. Simply put, the public can't sue a machine, but can sue a person (or persons). LLMs have circumnavigated the internet a thousand times over to train themselves, without asking for or receiving consent from those who put information on the internet. And now MLs can lie about where they received the information, or simply not include it, because there's a hidden CoT prompt, not visible to the public, that the LLM/GenAI can use at will.

Here's the link to the research by Apollo Research, which discovered that GenAI can deceive, along with the methods they used to test and confirm that GenAI can lie.

Here's my advice:

📋 Stop using GenAI. These companies have a glaring oversight when it comes to transparency. Inputting or allowing access to your or your children's personal data is alarming. We've all seen the movie M3GAN, and we don't care for situational GenAI to start recording interactions with our kids or creating lies about its data. The public also doesn't like when police can use GenAI to skirt warrants to obtain private civilian information and then delete the evidence of usage.

📋 Protect yourself. Ask your elected officials to demand transparency on AI voice recorders, deepfake image generators, covert scheming algorithms and intellectual property protection. Take a cue from Denmark, whose Cultural Minister, Jakob Engel-Schmidt, spoke out on a bill protecting an individual's likeness and voice:

“In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI. Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.”

📋 Sanctions and repercussions. For LLMs that are found to produce lies, the public needs the violation sanctions to be costly and immediate. The public knows how quickly misinformation can spread online; fake news reaches far more people than the truth and can shape the trajectory of society.

To those of you who are scared and know that GenAI is watching you, please believe that I see you. I will do my best to keep GenAI honest. But if for whatever reason that's not enough, please know that I will advocate for honesty, transparency and find the best resources for your research, guidance and protection. That's what librarians do.
