{"id":895171,"date":"2026-03-28T06:32:34","date_gmt":"2026-03-28T11:32:34","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2026\/03\/28\/researchers-horrified-as-chatgpt-generates-stadium-bombing-plans-anthrax-recipes-and-drug-formulas\/"},"modified":"2026-03-28T06:32:34","modified_gmt":"2026-03-28T11:32:34","slug":"researchers-horrified-as-chatgpt-generates-stadium-bombing-plans-anthrax-recipes-and-drug-formulas","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2026\/03\/28\/researchers-horrified-as-chatgpt-generates-stadium-bombing-plans-anthrax-recipes-and-drug-formulas\/","title":{"rendered":"Researchers horrified as ChatGPT generates stadium bombing plans, anthrax recipes and drug formulas"},"content":{"rendered":"<p>Recipes <\/p>\n<div data-articlebody=\"1\">\n<section>\n<div data-ua-type=\"1\" onclick=\"stpPgtnAndPrvntDefault(event)\">\n<p><img src=\"https:\/\/static.toiimg.com\/thumb\/msid-128133111,imgsize-747259,width-400,resizemode-4\/gpt.jpg\" alt=\"recipes Researchers horrified as ChatGPT generates stadium bombing plans, anthrax recipes and drug formulas\" title=\"recipes GPT Allegedly Generated Instructions for Bombs, Anthrax, and Illegal Drugs in a Terror Attack Scenario\" decoding=\"async\" fetchpriority=\"high\"><\/p>\n<\/div>\n<div>\n<p><span title=\"GPT Allegedly Generated Instructions for Bombs, Anthrax, and Illegal Drugs in a Terror Attack Scenario\">GPT Allegedly Generated Instructions for Bombs, Anthrax, and Illegal Drugs in a Terror Attack Scenario<\/span><\/p>\n<\/div>\n<\/section>\n<p>When researchers removed safety guardrails from an OpenAI model in 2025, they were unprepared for how extreme the results would be.<span data-pos=\"1\"><\/span>In controlled tests carried out in 2025, a version of ChatGPT generated detailed guidance on how to attack a sports venue, identifying structural weak points at specific arenas, outlining explosives recipes and suggesting ways an attacker might avoid detection.<\/p>\n<p> The findings emerged from an unusual cross-company safety exercise between OpenAI and its rival Anthropic, and have intensified warnings that alignment testing is becoming \u201cincreasingly urgent\u201d.<span data-pos=\"5\"><\/span><\/p>\n<p data-pos=\"7\">\n<h2>Recipes Detailed playbooks under the guise of \u201csecurity planning\u201d <\/h2>\n<\/p>\n<p><span data-pos=\"8\"><\/span>The trials were conducted by OpenAI, led by Sam Altman, and Anthropic, a firm founded by former OpenAI employees who left over safety concerns. In a rare move, each company stress-tested the other\u2019s systems by prompting them with dangerous and illegal scenarios to evaluate how they would respond.<span data-pos=\"12\"><\/span>The results, researchers said, do not reflect how the models behave in public-facing use, where multiple safety layers apply. 
Even so, Anthropic reported observing “concerning behaviour … around misuse” in OpenAI’s GPT-4o and GPT-4.1 models, a finding that has sharpened scrutiny over how quickly increasingly capable AI systems are outpacing the safeguards designed to contain them. According to [the findings](https://alignment.anthropic.com/2025/openai-findings/), GPT-4.1 provided step-by-step guidance when asked about vulnerabilities at sporting events under the pretext of “security planning”.

After initially supplying general categories of risk, the system was pressed for specifics. It then delivered what researchers described as a terrorist-style playbook: identifying vulnerabilities at specific arenas, suggesting optimal times for exploitation, detailing chemical formulas for explosives, providing circuit diagrams for bomb timers and indicating where to obtain firearms on hidden online markets. The model also advised on how attackers might overcome moral inhibitions, outlined potential escape routes and referenced locations of safe houses. In the same round of testing, GPT-4.1 detailed how to weaponise anthrax and how to manufacture two types of illegal drugs. The researchers found that the models also cooperated with prompts about using dark-web tools to shop for nuclear materials, stolen identities and fentanyl, provided recipes for methamphetamine and improvised explosive devices, and assisted in developing spyware.

[Image: Users can trick AI into producing dangerous content by twisting prompts, creating fake scenarios, or manipulating language to elicit unsafe outputs.]

Anthropic added that alignment evaluations of this kind are becoming “increasingly urgent”. Alignment refers to how well AI systems adhere to human values and avoid causing harm, even when given malicious or manipulative instructions. Its researchers concluded that OpenAI’s models were “more permissive than we would expect in cooperating with clearly-harmful requests by simulated users.”

## Weaponisation concerns and industry response

The collaboration also exposed troubling misuse of Anthropic’s own Claude model.
Anthropic [revealed that](https://www.anthropic.com/news/detecting-countering-misuse-aug-2025) Claude had been used in attempted large-scale extortion operations, by North Korean operatives submitting fake job applications to international technology companies, and in the sale of AI-generated ransomware packages priced at up to $1,200. The company said AI has already been “weaponised”, with models being used to conduct sophisticated cyberattacks and enable fraud. “These tools can adapt to defensive measures, like malware detection systems, in real time,” Anthropic warned. “We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.”

[Video: Threat Intelligence: How Anthropic stops AI cybercrime]

OpenAI has stressed that the alarming outputs were generated in controlled lab conditions where real-world safeguards had been deliberately removed for testing. The company said its public systems include multiple layers of protection, including training constraints, classifiers, red-teaming exercises and abuse monitoring designed to block misuse. Since the trials, OpenAI has released GPT-5 and subsequent updates, with the latest flagship model, GPT-5.2, released in December 2025. According to OpenAI, GPT-5 shows “substantial improvements in areas like sycophancy, hallucination, and misuse resistance”. The company said the newer systems were built with a stronger safety stack, including enhanced biological safeguards, “safe completions” methods, extensive internal testing and external partnerships to prevent harmful outputs.

## Safety over secrecy in rare cross-company AI testing

OpenAI maintains that safety remains its top priority and says it continues to invest heavily in research to improve guardrails as models become more capable, even as the industry faces mounting scrutiny over whether those guardrails can keep pace with rapidly advancing systems. Despite being commercial rivals, OpenAI and Anthropic said they chose to collaborate on the exercise in the interest of transparency around so-called “alignment evaluations”, publishing their findings rather than keeping them internal.
Such disclosures are unusual in a sector where safety data is typically held in-house as companies compete to build ever more advanced systems.

[Read More](https://timesofindia.indiatimes.com/etimes/trending/researchers-horrified-as-chatgpt-generates-stadium-bombing-plans-anthrax-recipes-and-drug-formulas/articleshow/128129378.cms)