
Hi, I'm

Rittik Bhattacharya

As you might notice from the photo at left, I love to draw. Beyond art, I also do journalism, social media work, curriculum development, and volunteering. I’ve studied history, biology, English language and literature, statistical math, and film in school, and philosophy, politics, economic theory, theology, and sociology outside of it. I’m also proficient in Mandarin Chinese. I’m looking to study philosophy, politics, and economics in higher education. Thanks for visiting my page and giving me the opportunity to share myself with you!

New Blog:

Artificial Intelligentsia: The Implications of Moral AIgency


In March 2025, a study at UC San Diego reached a stunning and harrowing conclusion: OpenAI’s GPT-4.5 passed the Turing test. Designed by pioneering computer scientist Alan Turing, the assessment has a human interrogator converse blindly with two hidden partners, one an artificially intelligent system and one a real human. The interrogator then has to determine which of the two they think is the human. The test hinges on intuition, something artificially intelligent systems are thought not to have. If the AI wins this “imitation game,” it has successfully exhibited intelligent behavior like a human’s.

Now, AI has won the imitation game. The human judges, asked to decide which conversation partner was the person, attributed human status to GPT-4.5 73% of the time. This is significantly above the 50% we would expect if the two participants exhibited equally human intelligence (or equally intelligent humanity). The implications of that victory are astounding.
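To make “significantly above” concrete, here is a quick back-of-the-envelope binomial test in Python. The 73% rate is the one reported above, but the trial count of 100 is purely a hypothetical number chosen for illustration, not the study’s actual sample size.

from math import comb

# Hypothetical illustration: suppose judges picked GPT-4.5 as the human in
# 73 of 100 conversations (73% matches the rate reported above; the count
# of 100 trials is an assumption made only for this sketch).
n, k = 100, 73

# Exact one-sided binomial test against the 50% "coin flip" null hypothesis:
# the probability of seeing k or more "human" verdicts if judges were merely guessing.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"win rate = {k / n:.0%}, one-sided p-value = {p_value:.2e}")

Under that hypothetical trial count the p-value comes out vanishingly small, which is the sense in which 73% sits “significantly above” chance.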

Largely, we are already well aware that AI possesses “intelligence”: it is trained on an unimaginably vast set of knowledge and able not merely to regurgitate it, but to thoughtfully engage with the nuances of the relevant subsets of that knowledge when posed a question or task germane to it. But now that, even in conversation, we judge AI to behave more intelligently human than actual people do, obvious concerns arise about AI’s (near?) future role in human management, administration, and decision-making.

We are, in other words, on the precipice of an artificial intelligentsia. The intelligentsia is the status class of important thinkers who lay the intellectual groundwork for the trajectory of human society. It’s mostly a characteristic of historical European societies, but in America we see its consequences in deeply ideological features of our country like affirmative action and Diversity, Equity, and Inclusion. These things wouldn’t exist without a proper base of vocational intellectual criticism of an inequitable status quo and of ways to move past it.

When I say we are nearing an artificial intelligentsia, I am suggesting that this profession of intellectualism will be crowded out by AI, which can engage in and complete such criticism at a velocity, scale, and universality out of reach for us mere humans — all with so much human sensitivity and intuition that we’ve empirically confused AI for humans on account of those traits.

AI is equipped with sophistication and nuance, and it’s gearing up to set the moral guidelines of our society. But what are the implications of that?

To weigh that, let’s consult the work of well-educated people, the kind of vocational professionals AI could rival, on how plausible this is and how it could play out.

Professor Joseph Cruz of Williams College, Massachusetts, made a prediction in 2019. In Cruz’s view, AI will enter a position to make complex moral decisions, and he rightly anticipates that this will scare us. However, he suggests that pessimism exaggerates the problems this could create. His baseline argument for why AI can and should become a moral decision-maker hinges on the work of the Embodied Cognition Research Program and its eponymous concept.

Embodied cognition theory suggests that humans don’t think up ideas in a vacuum. Everything we think is fundamentally informed by our bodily experiences — senses, memories, environmental stressors, and so on. Cruz suggested then that AI will become embodied, and once embodied, it’ll have every advantage of human-style decision-making and more, with none of the copious points of failure that we possess.

I think this is idealistic. In my opinion, the fundamental thing that makes AI not human — and this is the big debate of today — is that it is not embodied. That’s the hallmark of its present non-humanity, regardless of whatever functional and behavioral tests it passes. Each person is a unit, but AI can’t be divided up like that.

This gives AI what I call unlimited collectivity. AI does not personalize, and on average, it’ll give the same response to the same question asked by anyone. That’s completely different from how humans operate. We are individuals, and we possess individual opinions that can’t be generalized. When different people are asked the same question, or even when one person is asked the same question by different people, or at different times, our answers end up distinct. That’s a simple consequence of human embodied cognition that AI hasn’t yet accessed. Our existence in a physical, limited form means that we’re constantly changing in unmeasurable, fluid ways.

Why can’t AI do this? It’s a singular entity trained on a single set of data. Different GPTs have access to different subsets of that data, or otherwise write up their interpretations of it differently. Perplexity structures its answers to include its sources of information in easy-to-read bullets, whereas Chat mostly uses informal speech. Nonetheless, they aren’t individuals. Their different styles come from explicit coding, not from natural, evolved response. A training-data-based AI must be collective, because every public user must have access to the one AI that can tell them whatever information they ask for. It makes little sense for an AI to give a different answer to a high school student than to a corporate manager if they ask the same question, because AI can’t inform its answer-building process with embodied stimuli like the youthful appearance of one and the professional register of the other.

So while AI can make decisions extremely well, and is on track to be able to make them better than humans can (most critically in economics, medicine, and driving), it lacks the embodied interactions necessary to make its decisions persistently unique from circumstance to circumstance. 

This is important when AI threatens to ascend to a higher, decision-making status in human society. Without embodiment, we cannot hold AI accountable for poor decisions. In typical private-sector endeavors, mistakes are punishable, with the consequences falling on the team or the managers of the failed project, for example. But if AI replaces project management, then there’s no one to fire but the team. That’s concerning, because it inequitably shifts the onus of high-stakes endeavors solely onto the individuals with no authority who carry out a project, and not onto the manager giving out the instructions.

One response to this comes in the form of research by A. Feder Cooper at Cornell University, in collaboration with Ben Laufer, Emmanuel Moss, and Helen Nissenbaum at Cornell Tech. Instead of the individual-accountability framework historically used in corporate settings, they suggest a systemic commitment to answering for, responding to, and repairing the harms caused. This implies that in the event of error, no individuals will be fired at all: the AI will commit to a detailed collaboration process with any humans involved.

I believe this needs substantial iteration and elaboration before it becomes viable, but its implications are impressive. At present, the alternate framework does not actually address the need that the attributive model of accountability fills. The status quo model functions as it does because when a project fails, it ends up unprofitable or harmful to an audience, and it is in the shareholders’ or the audience’s interest to let the team go rather than to try to repair the project. Attributive accountability, as such, isn’t about pointing fingers; it’s about cutting losses where they’re due, and a relational framework does not do that. This is especially true when AI is involved, where losses caused by AI can’t be answered for by AI.

Nonetheless, the plausibility of a relational accountability framework suggests something much greater: that AI is conscious. I truly think that it is. For AI to possess the ability to reflect on its failures, it must engage in metacognition, a long-standing biopsychological definition of consciousness. And indeed, if we ask questions that probe at the deepest existential matters that plague humans, AI can often answer these better than we can, without sacrificing the uncertainties and the bravery necessary to say, “I don’t know exactly, but here’s what could be the case.”

A conscious AI means a lot is in store for us. It means we as humans need to lean into our individuality, into the things we can be wrong about with accountability and headstrongness. It means humans need to emphasize bodily experience: all the emotions that we feel, and AI can’t, are ultimately physical things, from love in the flutters of the heart to fear in the tremors of the hand to anger in the heat behind the eyes. Our embodied experiences are key to the path forward. AI cannot be anyone intellectually, because our intellectual personhoods are so intimately based in every embodied experience we have had. These physical features of our existence, beyond electrons and sound waves, are the things that will ultimately separate humans from machines going forward, as machines enter the realm of consciousness.

References

https://dl.acm.org/doi/10.1145/3306618.3314280

https://dl.acm.org/doi/pdf/10.1145/3531146.3533150

Over the course of my high school career, I’ve been privileged with the opportunity to intern for various groups, like the Washington State Senate, the University of Maryland’s Center for International and Security Studies, Senator Manka Dhingra’s campaign for Attorney General, and more.

I’m an International Baccalaureate (IB) and Advanced Placement (AP) student, two programs internationally acclaimed for their academic rigor. I completed the IB’s two-year program on an accelerated track, and I have also sought extracurricular learning experiences at Oxford, Duke, and Yale.

When I’m not busy working or studying, I’m an avid reader, writer, curriculum developer, and social media expert, and I take on leadership roles with various non-profits and organizations.

To complement the work I do outside of school, I try my hardest to achieve top grades in difficult courses. I’ve also achieved a 99th percentile score on the SAT and a 91st percentile score on the IB diploma, which I received a year earlier than the international mode via an accelerated track.
