
Meta’s Misstep: AI Bot Still Names Biden as President After Trump’s Inauguration

This week, Meta’s artificial intelligence chatbot exhibited a concerning error, failing to recognize the current president of the United States, Donald Trump. The social media giant, which also owns Facebook, classified the failure as an urgent issue warranting an immediate fix. Remarkably, as late as Thursday, three days after Trump’s inauguration on Monday, Meta’s AI chatbot was still erroneously naming Joe Biden as president. The error was confirmed in a simple test conducted by a Reuters correspondent.

The error came to light when a Reuters reporter asked Meta’s AI chatbot who the current president of the United States is. The chatbot responded with the outdated assertion: ‘The current president of the United States is Joe Biden.’ It then contradicted itself by adding, ‘However, according to the most recent information available, Donald Trump was sworn in as the president on January 20, 2025.’ The embarrassing mistake prompted a scramble within Meta to correct the information across its services.

The frequency and urgency of these errors prompted Meta to escalate the problem through its SEV protocol, an emergency procedure for addressing pressing issues affecting its services. A company spokesperson did not elaborate on the specific measures Meta took to rectify the issue. While that silence is disconcerting, the severity of the problem evidently called for swift action, whatever approach was taken.

This presidential identification fumble was only one of several missteps that have plagued Meta since the latest U.S. presidential transition. It is, in fact, the third time in a single week that Meta has had to launch an emergency procedure to address a problem tied to the change in administration. Such incidents have understandably sparked outrage among social media observers, who are closely watching for politically driven modifications to Meta’s operations.

In a bid to mend fences with the incoming administration, Meta has made a series of questionable shifts in recent weeks. These include abandoning its U.S. fact-checking program, appointing new figures as chief global affairs officer and to its board, and abruptly ending its diversity initiatives. Taken together, the political inclination these changes imply is hard to ignore.

In another striking incident this week, Meta’s systems caused users to re-follow the social media accounts of President Trump, Vice President JD Vance, and First Lady Melania Trump. The issue was noticed by users who had deliberately unfollowed those accounts. It arose during the handover of the official White House social media accounts, a routine procedure that accompanies every change of administration.

In this particular case, however, the prolonged account transfer process meant that Meta’s systems failed to properly register users’ unfollow requests. The hitch was designated a top-priority SEV1, a clear indication of the disruptive scale of the problem, which left users involuntarily re-following accounts they had chosen to unfollow.

Another troubling instance involved Meta’s Instagram service, where some users reported that searches for #Democrat and #Democrats were blocked. Oddly, searches for #Republican appeared to work without disruption. This lopsided behavior of the search feature has only added to mounting concerns over Meta’s recent activities.

A Meta spokesperson acknowledged the problem, noting that it affected users’ ability to search for a wide range of hashtags on Instagram, not just those associated with Democrats. Despite that broader impact, the underlying tilt against one political faction has not gone unnoticed. It echoes the unsettling pattern of Meta’s recent changes and adds to the impression of a politically focused agenda.

In these alarming times, the chatbot’s muddled answer, naming Joe Biden as president even while acknowledging Donald Trump’s swearing-in, is a telling slip about the inner workings of the system. It mirrors the larger problem of unchecked errors and biases within AI systems at large.

The end of Biden’s term seems to have escaped the AI chatbot entirely, further calling into question the reliability and integrity of the algorithms that power such platforms. It raises a vital question about whether this technology can be trusted to disseminate accurate information.

The absence of any mention of Kamala Harris, the vice president under Biden’s administration, is another notable omission by Meta’s bot, one that reads as a downplaying of her significance and contributions. Meta’s AI appears to have opted to celebrate one-sided victories rather than foster balanced discourse.

Values like fairness and equal representation seem to wane when Meta’s skewed priorities surface. The preferential treatment of #Republican over #Democrat on Instagram paints a disquieting picture of the practices being implemented.

Meta’s efforts to fix these issues, meagre as they may seem, remain riddled with inadequacies. The mounting criticism of the company feeds a more pressing concern: the lack of adequate checks and balances within the tech industry.

The company’s arbitrary actions and its recent track record suggest a politically slanted modus operandi, casting doubt on whether fair representation and unbiased reporting can be expected from platforms like Meta.

In conclusion, the aftermath of the recent presidential transition has exposed uncertainties in Meta’s operations and eroded users’ trust in its impartiality. It is a test of faith in a tech giant’s ability to uphold fairness against a backdrop of seemingly biased, politically charged tendencies.