By Joe Udo
NEW YORK (CONVERSEER) – Grok AI, the artificial intelligence (AI) chatbot, has once again suffered a serious malfunction, this time in its responses to the shooting at Bondi Beach in Australia, which occurred during Hanukkah celebrations.
According to Gizmodo, Grok’s responses to user queries about the incident contained a large amount of inaccurate and, in some cases, entirely irrelevant information.
Grok’s errors were particularly glaring in relation to a viral video showing a 43-year-old bystander, Ahmed al-Ahmed, wrestling with the gunman and disarming him during the attack.
According to the latest news reports, the shooting left at least 16 people dead. Yet in its responses, Grok repeatedly misidentified the man who intervened.
Furthermore, when users uploaded the same image from the Bondi Beach shooting, Grok responded with completely unrelated details about alleged targeted killings of civilians in Palestine.
Grok’s latest responses indicate that it remains confused about the Bondi Beach shooting, even inserting information about it into completely unrelated questions or conflating it with another shooting at Brown University in Rhode Island. xAI, the company that developed Grok, has not yet issued an official comment on the matter.
This is not the first time Grok has gone off the rails; earlier this year, the chatbot referred to itself as “MechaHitler”.
Grok is owned by Elon Musk, the billionaire behind SpaceX, the social media platform X, Tesla, and other ventures.
