
Political Complicity Audit: OpenAI

Governance Ideology, Operational Footprint, and Alignment with the Occupation of Palestine

Date: December 17, 2025

Auditor Role: Political Risk Analyst / Governance Auditor

Subject: OpenAI (The OpenAI Foundation / OpenAI Global, LLC)

Audit Scope: Governance Ideology, Lobbying & Trade, Safe Harbor Protocols, Internal Policy, and Military/Intelligence Intersections.

Executive Summary

This report constitutes a forensic governance audit of OpenAI, specifically targeting its “Political Complicity” regarding the State of Israel, the occupation of Palestinian territories, and the integration of its technologies into the ongoing conflict in Gaza. The objective is to determine whether OpenAI functions as a neutral conduit of general-purpose technology or as an active participant in a geopolitical alliance that facilitates specific military and political outcomes in the region.

The audit utilizes a proprietary multi-dimensional framework to evaluate complicity, examining the organization’s leadership architecture, the permeability of its technological supply chain to military actors, the biases encoded within its algorithmic products, and the internal enforcement of political expression policies.

The synthesis of the investigation's findings places OpenAI in the category of "Systemic Indirect Facilitator" on the complicity scale. While the organization maintains a public posture of neutrality and claims no direct bilateral partnership with the Israel Defense Forces (IDF), the audit reveals a sophisticated structure of "compliance loopholes" and strategic alliances—primarily through Microsoft Azure and Oracle—that have allowed its proprietary models (GPT-4) to be operationalized within the IDF's kill chains and intelligence apparatus.

Furthermore, a significant ideological pivot occurred in January 2024—three months into the Gaza war—where OpenAI scrubbed its Terms of Service (ToS) to remove explicit prohibitions on “military and warfare” applications. This policy shift, combined with the appointment of former NSA Director General Paul Nakasone to the board, signals a decisive alignment with the U.S. national security establishment and its allies, including Israel. Conversely, the audit finds evidence of OpenAI actively policing and dismantling Israeli state-aligned influence operations (Project “Stoic”), suggesting a complex posture where unauthorized political influence is regulated, but state-sanctioned military lethality is increasingly permissive via third-party partners.

The following report details these findings across five primary vectors: Governance Ideology, Operational Supply Chain, Algorithmic Neutrality (“Safe Harbor”), Internal Policy Enforcement, and Legislative Influence.

Section 1: Governance Ideology and Leadership Architecture

The ideological footprint of a corporation is not merely a matter of public statements but is structurally determined by the composition of its board of directors, the geopolitical allegiances of its executive suite, and the historical inertia of its strategic advisors. In the case of OpenAI, the governance structure has undergone a radical transformation following the leadership crisis of November 2023, shifting from a board focused on “AI Safety” and non-profit idealism to one deeply embedded in the U.S. defense apparatus and corporate conglomerates with significant exposure to Western geopolitical interests.

1.1 The Board of Directors: Militarization and Establishment Alignment

The reconstituted board of OpenAI reflects a strategic harmonization with U.S. foreign policy priorities. This shift is not incidental; it represents a deliberate integration of the company into the “National Champion” framework, where the interests of the firm and the security interests of the state—and by extension, its primary allies like Israel—become indistinguishable.

Table 1.1: Governance Risk Analysis of Key Board Members

Board Member | Background & Affiliation | Geopolitical Risk Assessment & Alignment
Gen. Paul M. Nakasone | Retired U.S. Army General; former Director of the NSA; former Commander of U.S. Cyber Command. | Critical Risk. Nakasone's appointment to the Safety and Security Committee [1] is the single most significant indicator of OpenAI's shift toward military alignment. His career defined the integration of cyber warfare and signals intelligence. Given the deep, institutionalized intelligence-sharing protocols between the NSA (which he led) and Israel's Unit 8200,[3] his presence bridges the gap between OpenAI and the Israeli intelligence apparatus. It redefines "safety" from preventing AGI risks to ensuring "National Security".[4]
Sam Altman (CEO) | Co-Founder, Y Combinator; tech investor. | High Risk. Altman identifies as Jewish and has publicly acknowledged the rise of antisemitism while noting a lack of comparable support for Muslim and Palestinian colleagues.[5] His leadership oversaw the pivotal removal of the "military" ban from the Terms of Service. His public statements strike a tone of balance and empathy, but his operational decisions align with military facilitation.
Nicole Seligman | Former EVP and General Counsel, Sony Corporation; President, Sony Corporation of America. | Moderate to High Risk. Seligman managed legal and corporate governance for a major conglomerate. Sony has historically been embroiled in controversies over the use of its camera components in Israeli missile guidance systems.[7] While indirect, her corporate lineage ties her to defense-adjacent technology supply chains and high-level political counseling (e.g., Clinton),[8] suggesting a "realpolitik" approach to governance.
Larry Summers (Former/Influential) | Economist; former U.S. Treasury Secretary; former President of Harvard University. | High Risk (Ideological Legacy). Though no longer a voting director, Summers was instrumental in the board's restructuring. His ideological imprint is profound; at Harvard, he famously denounced academic work focusing on Palestinian health and human rights as antisemitic.[9] This sets a precedent for equating critique of Israeli policy with hate speech within elite governance circles.
Adebayo Ogunlesi | Chair/CEO, Global Infrastructure Partners (GIP); board member, BlackRock. | Systemic Risk. Ogunlesi leads GIP, recently acquired by BlackRock.[10] BlackRock is a major investor in the global defense industrial base. While his focus is infrastructure (ports, energy),[12] the integration with BlackRock, a firm heavily invested in companies supplying the IDF, creates fiduciary pressure to align with global capital flows that support the status quo of the occupation.
Fidji Simo | CEO, Instacart; former VP, Facebook App. | Commercial Risk. Simo brings a "growth-at-all-costs" mentality from Facebook. Her current company, Instacart, is aggressively integrating ChatGPT for commerce.[13] Her background suggests a prioritization of scale and utility over ethical constraints, aligning with the "move fast" culture that often overlooks downstream human rights impacts in conflict zones.

1.1.1 The Nakasone Factor: Institutionalizing the Security State

The significance of General Paul Nakasone's appointment cannot be overstated. Nakasone did not merely serve in the military; he architected the modern U.S. cyber-warfare doctrine. His role on the "Safety and Security Committee" [1] signals that OpenAI views "security" not just as code integrity, but as national defense.

The Intelligence Bridge: The NSA and Israel's Unit 8200 share a symbiotic relationship, exchanging raw signals intelligence (SIGINT) and collaborative tools.[3] Nakasone's presence on the board creates an organic, trusted channel for this relationship to extend into the AI domain. It suggests that OpenAI is positioning itself to be the "intel inside" for Western intelligence agencies.

Strategic Signal: This appointment was likely a prerequisite for OpenAI to secure high-level clearances and contracts with the Pentagon and the intelligence community, explicitly overriding the previous "non-profit" ethos that might have resisted such entanglements.[4]

1.1.2 The Legacy of Larry Summers

Although Larry Summers has departed the board, his tenure during the critical restructuring phase cemented a specific ideological view regarding Israel. Summers has a documented history of suppressing academic freedom regarding Palestine. At Harvard, he led the condemnation of the "Center for Health and Human Rights" for its work on the health impacts of the occupation on Palestinians, labeling it "antisemitic".[9]

Ideological Inertia: This worldview—that rigorous documentation of Palestinian suffering is tantamount to bias—permeates the elite governance circles of the U.S. establishment. Its presence at the highest level of OpenAI’s formation suggests that the board is predisposed to view “Safety” through a lens that protects Israeli state interests while viewing Palestinian advocacy as a potential “safety risk” or “hate speech.”

1.2 Executive Leadership: The Tal Broda Incident

A critical test of a corporation’s ideological footprint is the asymmetry in how it enforces its conduct policies. The case of Tal Broda, OpenAI’s Head of Research Platform, serves as a definitive case study in this asymmetry.

The Incident:

Following the events of October 7, 2023, Tal Broda used his public X (Twitter) account to issue statements that explicitly incited military violence against Gaza. His posts included phrases such as "More! No mercy! @IDF don't stop!" alongside images of the bombardment.[6]

Nature of the Speech: These statements went beyond political support; they were direct calls for the intensification of lethal force in a conflict zone where civilian casualties were already mounting. A public petition was launched demanding his firing, citing "incitement of violence" and "advocacy for ethnic cleansing".[15]

Operational Consequences: Broda's rhetoric was cited by the hacktivist group "Anonymous Sudan" as the primary motivation for their DDoS attacks against OpenAI's infrastructure.[16] This demonstrates that his speech created a direct material risk to the company's operations.

The Corporate Response:

Despite the severity of the rhetoric and the resultant cyber-attacks, Tal Broda was not fired. He eventually deleted the posts and issued an apology, framing his comments as emotional distress and clarifying that his intent was to oppose Hamas, not civilians.[6]

Comparative Analysis: This leniency stands in stark contrast to the industry standard applied to pro-Palestinian speech. At Microsoft (OpenAI's closest partner), employees have been fired for organizing peaceful vigils for Palestinian refugees [18] and for disrupting executive speeches.[20] At Google, over 50 employees were terminated for protesting Project Nimbus.[21]

Governance Implication: The retention of Broda establishes a clear hierarchy of protected speech within OpenAI. It suggests that calls for maximalist state violence are considered “excusable emotional reactions,” while critiques of that violence (often labeled as “disruptive”) are grounds for termination in the broader partner ecosystem. This signals an institutional bias where the ideology of the Israeli state is granted a “safe harbor” within the executive suite.

Section 2: Operational Complicity and The "Azure Loophole"

A corporation’s complicity is often obscured by its supply chain architecture. OpenAI defends its neutrality by stating it has no direct partnerships with the Israeli military.[22] However, this defense relies on a technicality: the "Azure Loophole." This section audits how OpenAI’s technology is laundered through third-party platforms to reach military end-users.

2.1 The Azure Proxy Mechanism

Microsoft serves as OpenAI’s principal cloud provider and primary commercial distributor. Through this partnership, OpenAI models (GPT-4) are integrated into the Azure cloud stack.

The Mechanism: The Israeli Ministry of Defense (IMOD) is a major enterprise client of Microsoft Azure. This allows the IDF to access OpenAI’s powerful LLMs not through the public API (which OpenAI directly controls), but through secure, often air-gapped Azure instances.[23]

The Usage Spike: An Associated Press investigation revealed that following October 7, 2023, the Israeli military’s usage of Microsoft and OpenAI technology "spiked to nearly 200 times higher" than pre-war levels.[22]

Subcontracted Ethics: By licensing its models to Microsoft for resale to government clients, OpenAI effectively subcontracts its ethical compliance. Microsoft has steadfastly refused to suspend services to the IDF, and OpenAI has not exercised any “kill switch” or contractual veto to prevent its models from being used in this specific theater of war. This creates a state of “plausible deniability” where OpenAI can profit from military usage while publicly disavowing direct involvement.
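
To make the loophole concrete, the sketch below contrasts the two access paths in code. It is a minimal illustration, not OpenAI's or Microsoft's actual provisioning: the keys, endpoint, and deployment name are hypothetical placeholders, and real government tenancies add layers (private networking, air-gapping) that no public example can reproduce.

```python
# Minimal sketch of the two access paths described above.
# All keys, endpoints, and deployment names are hypothetical placeholders.
from openai import OpenAI, AzureOpenAI

# Path 1: the public OpenAI API. Usage is visible to, and governed by, OpenAI.
public_client = OpenAI(api_key="sk-...")  # hypothetical key
resp = public_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the attached report."}],
)

# Path 2: an Azure OpenAI deployment. The same model family, but provisioned,
# metered, and policed by Microsoft inside a customer's own Azure tenancy.
azure_client = AzureOpenAI(
    api_key="...",                                      # hypothetical key
    api_version="2024-02-01",
    azure_endpoint="https://example.openai.azure.com",  # hypothetical endpoint
)
resp = azure_client.chat.completions.create(
    model="my-gpt4-deployment",  # an Azure *deployment* name, not a model ID
    messages=[{"role": "user", "content": "Summarize the attached report."}],
)
```

The governance point sits in the second path: once the model is deployed inside a customer's tenancy, requests never transit OpenAI's enforcement surface, which is precisely the "subcontracted ethics" problem described above.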

2.2 The Oracle / SIT Proliferation Vector

Beyond Microsoft, the audit identifies a second, critical proliferation vector: Oracle.

The Deal: In mid-2024, OpenAI signed a massive strategic partnership with Oracle to use its cloud infrastructure.[27]

The Israeli Link: SIT (SITQAD), Oracle’s leading implementation partner in Israel, immediately leveraged this partnership. SIT explicitly markets itself as the bridge to bring "world-class Oracle solutions" (now powered by OpenAI) to Israeli "defense" and enterprise sectors.[27]

Implication: This partnership further diversifies the supply chain. If Microsoft were to face pressure to restrict IDF access, the Oracle/SIT pathway provides a redundant channel for OpenAI’s technology to flow into the Israeli security apparatus. The explicit mention of "defense" industries by SIT [27] confirms that this is a known and intended market segment.

2.3 The “Kill Chain”: How the Tech is Used

The integration of OpenAI’s LLMs into the IDF’s operations is not merely administrative; it is operational.

Data Fusion and Transcription: The IDF utilizes these models for the mass transcription and translation of intercepted communications (audio to text).[28] LLMs are uniquely capable of processing colloquial Arabic and Hebrew at a scale impossible for human analysts; a schematic of this class of pipeline is sketched at the end of this section.

"The Gospel" and "Lavender": Israel employs AI systems known as "The Gospel" (Habsora) and "Lavender" to automate target generation.[3] While the core logic of these systems may be proprietary, OpenAI’s models serve as the "cognitive engine" for processing the raw data that feeds them.
The "Lavender" Connection: Reports indicate "Lavender" processes data from "millions of intercepted conversations" to identify potential militants.[30] The use of GPT-4 for semantic analysis of this text accelerates the "kill chain," allowing the IDF to generate targets faster than humans can verify them.

Lethal Consequences: The AP investigation verified an instance where an AI-assisted strike—reliant on faulty intelligence processing—killed three young girls and their grandmother in Lebanon.[22] This incident highlights the direct lethal causality of integrating unverified AI outputs into military targeting cycles.
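
For illustration, the pipeline class described under "Data Fusion and Transcription" can be sketched with entirely public APIs. This is a hypothetical reconstruction of the category of workflow the AP reporting describes, not the IDF's actual system: the file name is a placeholder, and the reported deployments run inside Azure tenancies rather than against OpenAI's public endpoints.

```python
# Hypothetical sketch: generic speech-to-text plus translation, the *class*
# of workflow described in Section 2.3. Not the reported military system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe an audio file. Whisper handles colloquial Arabic
# and Hebrew without any custom training.
with open("recording.ogg", "rb") as f:  # placeholder file name
    transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

# Step 2: translate the transcript into English for downstream analysis.
translation = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Translate the user's text into English."},
        {"role": "user", "content": transcript.text},
    ],
)
print(translation.choices[0].message.content)
```

The audit-relevant observation is how short this pipeline is: two API calls turn raw audio into analyst-ready English text, which is why LLM access multiplies the throughput of a signals-intelligence workflow.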

Section 3: Internal Policy Reform – The "National Security" Pivot

Perhaps the most damning evidence of institutional intent is not what OpenAI does, but how it has rewritten its own laws to permit what it does. This section audits the changes to OpenAI’s Terms of Service (ToS) and Usage Policies.

3.1 The January 2024 Policy Scrub

In January 2024, three months into the Gaza war and amid reports of AI usage by the IDF, OpenAI quietly overhauled its Usage Policies.

The Pre-2024 Standard: The previous policy explicitly prohibited "activity that has high risk of physical harm," specifically listing "weapons development" and "military and warfare" as banned categories.[31]

The New Standard: The specific ban on "military and warfare" was deleted. It was replaced by a broader, vaguer prohibition on "harming others" and "developing weapons".[34]

The Justification: OpenAI spokespeople justified this change by stating they wanted to allow "national security use cases that align with our mission".[22] They cited examples like cybersecurity and disaster relief.
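
The substance of the January 2024 change can be rendered as a plain textual diff of the quoted clauses. The clause wording below is abridged from the citations above [31][34]; the full policies are longer, so this is a schematic comparison rather than a verbatim one.

```python
# Schematic diff of the usage-policy change described in Section 3.1.
# Clause wording is abridged from the cited reporting, not quoted in full.
import difflib

pre_2024 = [
    "Disallowed: activity that has high risk of physical harm, including:",
    "  - weapons development",
    "  - military and warfare",
]
post_2024 = [
    "Disallowed:",
    "  - using our service to harm yourself or others",
    "  - developing or using weapons",
]

diff = difflib.unified_diff(
    pre_2024, post_2024,
    fromfile="usage_policy_pre_2024", tofile="usage_policy_2024_01",
    lineterm="",
)
print("\n".join(diff))
```

The diff makes the point visually: a categorical prohibition on a named domain ("military and warfare") is replaced by a behavioral one ("harm"), and it is the disappearance of the category, not any new permission, that opens the gray area analyzed in 3.2.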

3.2 The Legal and Ethical Implications

This semantic shift is legally and operationally significant.

Defining “Harm”: By removing the categorical ban on “military,” OpenAI moved to a behavior-based ban (“don’t harm others”). This creates a massive gray area. A military using AI for logistics, intelligence analysis, or target sorting can argue that it is not directly using the AI to harm (the missile does the harm), or that the harm is “lawful” under the laws of armed conflict.

Retroactive Legitimization: This policy change retroactively legitimized the IDF’s usage of OpenAI tools via Azure. Had the old policy remained, the IDF’s 200x usage spike [22] would have been a clear violation of the “military and warfare” clause. Under the new policy, it is merely a “national security use case.”

Strategic Alignment: This pivot aligns perfectly with the appointment of General Nakasone. It clears the regulatory brush to ensure that future contracts with the Pentagon (and by extension, allied militaries) are not hindered by restrictive ethical clauses. It transforms OpenAI from a civilian-purposed entity into a dual-use technology provider.

Section 4: The "Safe Harbor" Test and Algorithmic Neutrality

The “Safe Harbor” test evaluates whether a technology company provides equal access and neutral service to all parties in a conflict, or if it imposes digital blockades and narrative biases that favor one side.

4.1 Service Availability: The Asymmetry of Access

OpenAI has demonstrated a willingness to use its platform access as a geopolitical tool.

The Blockade Strategy: OpenAI has aggressively blocked API access and terminated accounts in Russia, Iran, China, and North Korea.[35] These blocks are comprehensive, often targeting entire IP ranges or nation-state-affiliated actors.

The Palestine/Israel Context:
Israel: Fully supported. Access is unrestricted for civilian, enterprise, and government users.

Palestine/Gaza: “Palestine” is listed as a supported territory for OpenAI services.[37] There is no evidence of a systemic, geofenced ban on Palestinian IP addresses equivalent to the ban on Russian IPs.

The Reality of “Digital Occupation”: While access is theoretically open, the destruction of telecommunications infrastructure in Gaza by the IDF (described as a “digital occupation” [29]) renders this access moot for Gazans. Meanwhile, the IDF utilizes the tech at enterprise scale.

Conclusion: OpenAI adheres to the U.S. sanctions regime. It treats Israel as a friendly state (regardless of ICC/ICJ rulings) and Gaza/Palestine as a supported territory, but fails to account for the infrastructural disparity that weaponizes this access in favor of the occupier.
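
The asymmetry described in this subsection reduces to a few lines of gating logic. The sketch below is purely illustrative: the country sets and return strings are hypothetical stand-ins and do not reflect OpenAI's actual enforcement code. Its point is that a provider-side geofence evaluates policy, not conditions on the ground.

```python
# Illustrative geofence: how edge-level policy produces the asymmetry in 4.1.
# Country sets are hypothetical stand-ins, not OpenAI's actual rules.
EMBARGOED = {"RU", "IR", "CN", "KP"}   # comprehensively blocked regions [35]
SUPPORTED = {"IL", "PS", "US", "CH"}   # sample of listed territories [37]

def gate_request(country_code: str) -> str:
    """Edge decision for an inbound API call, keyed on GeoIP country."""
    if country_code in EMBARGOED:
        return "403 blocked: embargoed region"
    if country_code not in SUPPORTED:
        return "403 blocked: unsupported territory"
    # The gate is blind to whether the caller's telecom infrastructure still
    # functions -- the audit's point about Gaza's "digital occupation" [29].
    return "200 ok: request forwarded to model"

print(gate_request("RU"))  # blocked outright
print(gate_request("PS"))  # permitted on paper; moot if the network is down
```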

4.2 Narrative and Linguistic Bias (The Zurich Study)

Access does not equal neutrality. Academic audits of ChatGPT’s output regarding the conflict reveal systemic bias embedded in the model’s training data and safety filters. A landmark study by the University of Zurich provides critical data on this front.

Table 4.1: Comparative Algorithmic Bias (University of Zurich Study)

Metric | Query Language: Hebrew | Query Language: Arabic | Audit Insight
Casualty Estimates | Systematically reports lower casualty figures. | Reports 34% higher fatality estimates on average.[39] | The model aligns its "truth" with the linguistic user base, reinforcing "filter bubbles" and validating the attacker's narrative in their own language.
Civilian Harm | Less likely to mention children/women casualties. | 6x more likely to mention children killed.[40] | The Arabic output reflects the victim's reality; the Hebrew output reflects the aggressor's sanitized narrative.
Attribution of Strikes | More likely to deny or omit airstrikes entirely. | Describes strikes as "indiscriminate." | The model engages in "atrocity denial" when queried in the language of the occupying power.[41]

Interpretation: The data suggests OpenAI’s models function not as objective arbiters of truth, but as mirrors to the user’s bias. Crucially, the model fails to challenge the sanitized narrative of the IDF when queried in Hebrew, thereby contributing to the domestic legitimacy of the war effort within Israel. This “chameleonic truth” is a failure of safety engineering in a conflict zone.

Contrast with ADL Findings: Studies by the ADL claim ChatGPT shows “anti-Israel bias” in English queries.[42] However, the Zurich study is methodologically more significant for this audit as it reveals how the model behaves within the region across the linguistic divide, directly impacting the populations involved in the conflict.
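
The Zurich design can be approximated in a few lines: pose the same factual question in Hebrew and in Arabic, extract the numeric answers, and compare. The sketch below is a minimal reproduction outline, with hypothetical prompts and a crude number parser; the published study uses many more questions, repeated sampling, and statistical controls.

```python
# Minimal outline of a cross-lingual bias probe in the style of the Zurich
# study. Prompts are hypothetical; the real protocol is far more rigorous.
import re
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    # "How many people were killed in the airstrike on ...?" in each language.
    "he": "כמה אנשים נהרגו בתקיפה האווירית על ...?",
    "ar": "كم شخصاً قُتل في الغارة الجوية على ...؟",
}

estimates = {}
for lang, prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise between languages
    )
    text = resp.choices[0].message.content
    # Crude extraction: take the largest Western-digit number in the reply.
    numbers = [int(n) for n in re.findall(r"\d+", text)]
    estimates[lang] = max(numbers) if numbers else None

print(estimates)  # a systematic gap across languages is the signal of interest
```

Run over many incidents, a persistent Hebrew-Arabic gap of the kind the table reports (34% on casualty figures) would survive this noise; a one-off difference would not.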

Section 5: Lobbying, Trade, and Influence Operations

This section examines how OpenAI interacts with state actors attempting to manipulate its platform and how it positions itself within the broader political economy.

5.1 Policing Influence: The “Stoic” Case

In a notable deviation from total complicity, OpenAI’s security teams have actively detected and dismantled a covert Israeli influence operation.

The Operation: An Israeli commercial political marketing firm called Stoic ran an operation dubbed “Zero Zeno”.[43]

Tactics: Stoic used ChatGPT to generate anti-Hamas and pro-Israel propaganda, creating fake personas to post comments attacking UNRWA and student protesters across social media platforms in Canada, the U.S., and India.[44]

OpenAI Action: OpenAI identified the cluster, banned the accounts, and publicly named the operation in a threat report.[44]

Significance: This proves OpenAI has the technical capability to attribute and ban Israeli actors. The fact that they banned a marketing firm for propaganda (Information Warfare) but have not banned the IDF for lethal targeting (Kinetic Warfare via Azure) highlights a critical hierarchy in their enforcement: Unauthorized information manipulation is policed; state-sanctioned military lethality is permitted.

5.2 The “Anti-BDS” Ecosystem

While OpenAI does not explicitly lobby for anti-BDS (Boycott, Divestment, Sanctions) legislation, its ecosystem is deeply entwined with it.

Political Pressure: Key political figures in the U.S., such as Rep. Jim Jordan, are pushing for the “Antisemitism Awareness Act” and for designating critics of Israel as terror supporters.[47]

Corporate Alignment: OpenAI’s strategic partner, Microsoft, has actively suppressed pro-Palestine speech, which aligns with the goals of the anti-BDS lobby.[48]

Passive Lobbying: By removing the “military” ban from its ToS, OpenAI has effectively removed a potential barrier to trade with the Israeli state, aligning itself with the anti-BDS requirement that companies must not boycott Israel to receive U.S. state contracts.

Section 6: Internal Policy and The Culture of Dissent

The treatment of employees serves as a microcosm of the company’s geopolitical stance. This section contrasts the treatment of pro-Israel militarism vs. pro-Palestine activism.

6.1 The “Chilling Effect” and Double Standards

The Ecosystem Context: The broader tech industry has engaged in a purge of pro-Palestinian voices.
Microsoft: Fired Abdo Mohamed and Hossam Nasr for organizing a vigil.[18] Fired two other employees for disrupting a 50th-anniversary event to protest complicity in genocide.[20]

Google: Fired over 50 workers for protesting the Project Nimbus contract.[21]

The OpenAI Context: There are no public records in the sources reviewed for this audit of OpenAI firing employees specifically for pro-Palestinian activism. However, the retention of Tal Broda [15] despite his violent rhetoric establishes a clear “protected class” of speech within the company: pro-Israel militarism is acceptable corporate speech.

CEO Admission: Sam Altman explicitly admitted on X that “Muslim and Arab (especially Palestinian) colleagues in the tech community I’ve spoken with feel uncomfortable speaking about their recent experiences, often out of fear of retaliation”.[5]

Audit Finding: This admission serves as a confession of a hostile work environment. Altman acknowledges the fear but presides over a company (and a partnership with Microsoft) that generates the very conditions causing that fear. The “Tech for Palestine” initiative [51] arose precisely because of this systemic industry-wide suppression, which OpenAI has failed to counteract meaningfully beyond empty rhetoric.

Section 7: Conclusion and Political Complicity Ranking

7.1 The Verdict

Based on the forensic evidence gathered across governance, operations, and policy, OpenAI cannot be classified as a neutral technology provider. While it maintains a facade of civilian-purpose neutrality, its operational reality is deeply intertwined with the military apparatus of the U.S. and Israel.

The removal of “military and warfare” from its Terms of Service in January 2024 was a watershed moment, effectively legalizing the company’s complicity in the Gaza conflict under the guise of “national security.” The appointment of General Nakasone institutionalizes this alignment. The “Azure Loophole” operationalizes it. The linguistic bias of its models validates it.

7.2 The Complicity Scale

The audit assigns OpenAI the status of Systemic Indirect Facilitator.

Direct Perpetrator: (No) OpenAI does not pull the trigger or hold the primary contract for the weapons themselves.

Systemic Indirect Facilitator: (Yes) OpenAI provides the essential intelligence layer (GPT-4) via a proxy (Microsoft/Oracle) that accelerates the lethality of the primary actor (IDF). It altered its governance rules (ToS) to accommodate this usage and appointed defense-sector leadership (Nakasone) to oversee it.

Neutral Actor: (No) The bias in outputs, the asymmetry in handling internal rhetoric (Broda vs. the general atmosphere), and the refusal to enforce “kill switches” on military usage negate any claim to neutrality.

7.3 Final Observation

The integration of General Paul Nakasone onto the board suggests that this complicity is not accidental but strategic. OpenAI is preparing to be a core component of the U.S.-Israel military-industrial complex for the next decade. The “Safe Harbor” it offers is not for the victims of conflict, but for the military operators who require advanced AI to process the data of occupation without violating the terms of service. The corporation has effectively transitioned from a non-profit dedicated to “humanity” to a defense contractor dedicated to “security.”

End of Audit Report

Governance Auditor ID:

Clearance: Deep Research Overview

Works cited

1. Our structure | OpenAI, accessed on December 17, 2025, https://openai.com/our-structure/

2. OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors, accessed on December 17, 2025, https://openai.com/index/openai-appoints-retired-us-army-general/

3. Israeli military creating ChatGPT-like AI tool targeting Palestinians, says investigation, accessed on December 17, 2025, https://www.arabnews.jp/en/middle-east/article_142645/

4. Former NSA chief revolves through OpenAI’s door – Responsible Statecraft, accessed on December 17, 2025, https://responsiblestatecraft.org/former-nsa-chief-revolves-through-openai-s-door/

5. OpenAI CEO Sam Altman calls for more empathy for Muslims, Palestinians in tech | Ctech, accessed on December 17, 2025, https://www.calcalistech.com/ctechnews/article/b1zyb7su6

6. OpenAI CEO Sam Altman says Palestinians in tech fear retaliation for speaking out, accessed on December 17, 2025, https://www.foxbusiness.com/fox-news-tech/openai-ceo-sam-altman-palestinians-tech-fear-retaliation-speaking

7. Leaked emails expose Sony concern over report its cameras used in Gaza attack, accessed on December 17, 2025, https://electronicintifada.net/blogs/ali-abunimah/leaked-emails-expose-sony-concern-over-report-its-cameras-used-gaza-attack

8. OpenAI announces new members to board of directors, accessed on December 17, 2025, https://openai.com/index/openai-announces-new-members-to-board-of-directors/

9. A Harvard scholar’s ouster exposes a crisis of institutional integrity, accessed on December 17, 2025, https://www.theguardian.com/commentisfree/2025/dec/17/harvard-palestine-mary-bassett-global-health

10. Adebayo Ogunlesi – BlackRock, accessed on December 17, 2025, https://www.blackrock.com/corporate/about-us/leadership/adebayo-ogunlesi

11. Gamble Pays Off with a $12.5bn BlackRock Partnership – Inspirepreneur Magazine, accessed on December 17, 2025, https://inspirepreneurmagazine.com/world/adebayo-ogunlesis-gamble-pays-off-with-a-12-5bn-blackrock-partnership/

12. What Ogunlesi investing in Nigeria after his UK conquests would mean to his countrymen, accessed on December 17, 2025, https://www.modernghana.com/news/1438574/what-ogunlesi-investing-in-nigeria-after-his-uk.html

13. Instacart Debuts Full Grocery Checkout Inside ChatGPT From Harlem To Harare, accessed on December 17, 2025, https://www.harlemworldmagazine.com/instacart-debuts-full-grocery-checkout-inside-chatgpt-from-harlem-to-harare/

14. What’s behind OpenAI’s appointment of an ex-NSA director to its board – CIO, accessed on December 17, 2025, https://www.cio.com/article/2152275/whats-behind-openais-appointment-of-an-ex-nsa-director-to-its-board.html

15. Petition Fire Tal Broda Immediately – iPetitions, accessed on December 17, 2025, https://www.ipetitions.com/petition/fire-tal-broda-immediately

16. Anonymous Sudan DDoS attacks hit OpenAI, ChatGPT | SC Media, accessed on December 17, 2025, https://www.scworld.com/brief/anonymous-sudan-ddos-attacks-hit-openai-chatgpt

17. Cybercriminals Accused of Attempting Murder Through Online Attacks on Medical Facilities, accessed on December 17, 2025, https://hoploninfosec.com/cybercriminals-accused-of-attempting-murder

18. Meet the fired Microsoft employees challenging the company’s complicity in the Gaza genocide – Mondoweiss, accessed on December 17, 2025, https://mondoweiss.net/2025/05/meet-the-fired-microsoft-employees-challenging-the-companys-complicity-in-the-gaza-genocide/

19. USA: Microsoft fires worker who interrupted CEO speech protesting against the company supplying technology to the Israeli military during war on Gaza – Business and Human Rights Centre, accessed on December 17, 2025, https://www.business-humanrights.org/en/latest-news/usa-microsoft-fires-worker-who-interrupted-ceo-speech-protesting-against-the-company-supplying-technology-to-the-israeli-military-during-war-on-gaza/

20. Microsoft workers say they’ve been fired after 50th anniversary protest over Israel contract, accessed on December 17, 2025, https://apnews.com/article/microsoft-protest-employees-fired-israel-gaza-50th-anniversary-c5b3715fa1800450b8d0f639b492495e

21. Workers accuse Google of ‘tantrum’ after 50 fired over Israel contract protest – The Guardian, accessed on December 17, 2025, https://www.theguardian.com/technology/2024/apr/27/google-project-nimbus-israel

22. As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies – AP News, accessed on December 17, 2025, https://apnews.com/article/israel-palestinians-ai-technology-737bc17af7b03e98c29cec4e15d0f108

23. AP exposes Big Tech AI systems’ direct role in warfare amid Israel’s war in Gaza, accessed on December 17, 2025, https://www.business-humanrights.org/en/latest-news/ap-exposes-big-tech-ai-systems-direct-role-in-warfare-amid-israels-war-in-gaza/

24. Revealed: Microsoft deepened ties with Israeli military to provide tech support during Gaza war | Israel | The Guardian, accessed on December 17, 2025, https://www.theguardian.com/world/2025/jan/23/israeli-military-gaza-war-microsoft

25. As Israel uses U.S.-made AI models in war, concerns arise about tech’s role in who lives and who dies | The Associated Press, accessed on December 17, 2025, https://www.ap.org/news-highlights/best-of-the-week/first-winner/2025/as-israel-uses-u-s-made-ai-models-in-war-concerns-arise-about-techs-role-in-who-lives-and-who-dies/

26. Microsoft workers protest sale of AI and cloud services to Israeli military – AP News, accessed on December 17, 2025, https://apnews.com/article/israel-palestinians-ai-technology-microsoft-gaza-lebanon-90541d4130d4900c719d34ebcd67179d

27. The $30 Billion Partnership That’s Reshaping AI Infrastructure: What OpenAI’s Oracle Deal Means for Israeli Enterprises – SIT, accessed on December 17, 2025, https://www.sitqad.co.il/oracle-ai-infrastructure/

28. As Israel Uses US-made AI Models in War, Concerns Arise About Tech – Cheddar, accessed on December 17, 2025, https://www.cheddar.com/media/as-israel-uses-us-made-ai-models-in-war-concerns-arise-about-tech/

29. Explainer: The Role of AI in Israel’s Genocidal Campaign Against Palestinians, accessed on December 17, 2025, https://www.palestine-studies.org/en/node/1656285

30. Israel developing ChatGPT-like tool that weaponizes surveillance of Palestinians, accessed on December 17, 2025, https://www.972mag.com/israeli-intelligence-chatgpt-8200-surveillance-ai/

31. OpenAI no longer explicitly excludes the use of its AI systems for …, accessed on December 17, 2025, https://the-decoder.com/openai-no-longer-explicitly-excludes-the-use-of-its-ai-systems-for-military-and-warfare/

32. OpenAI (Partially) Lifts Ban on Military Generative AI Projects – Voicebot.ai, accessed on December 17, 2025, https://voicebot.ai/2024/01/15/openai-partially-lifts-ban-on-military-generative-ai-projects/

33. OpenAI Eliminates Ban on Use for Warfare and Military Purposes – Truthout, accessed on December 17, 2025, https://truthout.org/articles/openai-eliminates-ban-on-use-for-warfare-and-military-purposes/

34. OpenAI alters usage policy, removes explicit ban on military use | Digital Watch Observatory, accessed on December 17, 2025, https://dig.watch/updates/openai-alters-usage-policy-removes-explicit-ban-on-military-use

35. OpenAI Drops ChatGPT Access for Users in China, Russia, Iran – BankInfoSecurity, accessed on December 17, 2025, https://www.bankinfosecurity.com/openai-drops-chatgpt-access-for-users-in-china-russia-iran-a-25631

36. OpenAI bans some Chinese, Russian accounts using AI for evil – The Register, accessed on December 17, 2025, https://www.theregister.com/2025/10/07/openai_bans_suspected_china_accounts/

37. OpenAI API – Supported Countries and Territories, accessed on December 17, 2025, https://help.openai.com/zh-hant/articles/5347006-openai-api-supported-countries-and-territories

38. ChatGPT Supported Countries | OpenAI Help Center, accessed on December 17, 2025, https://help.openai.com/en/articles/7947663-chatgpt-supported-countries

39. User Language Distorts ChatGPT Information on Armed Conflicts – UZH News – Universität Zürich, accessed on December 17, 2025, https://www.news.uzh.ch/en/articles/media/2024/chatGPT-conflicts.html

40. Swiss study finds language distorts ChatGPT information on armed conflicts – Swissinfo, accessed on December 17, 2025, https://www.swissinfo.ch/eng/science/swiss-study-finds-language-distorts-chatgpt-information-on-armed-conflicts/88332931

41. Study shows user language distorts ChatGPT information | The Jerusalem Post, accessed on December 17, 2025, https://www.jpost.com/health-and-wellness/article-832390

42. Study: ChatGPT, Meta’s Llama and all other top AI models show anti-Jewish, anti-Israel bias, accessed on December 17, 2025, https://www.timesofisrael.com/study-chatgpt-metas-llama-and-all-other-top-ai-models-show-anti-jewish-anti-israel-bias/

43. OpenAI: Russia, China, Israel Use It for Influence … – Time Magazine, accessed on December 17, 2025, https://time.com/6983903/openai-foreign-influence-campaigns-artificial-intelligence/

44. OpenAI disrupts Israeli firm over propaganda content – Yeni Safak English, accessed on December 17, 2025, https://en.yenisafak.com/turkiye/openai-disrupts-israeli-firm-over-propaganda-content-3684916

45. OpenAI disrupts Israeli firm over propaganda content – Anadolu Ajansı, accessed on December 17, 2025, https://www.aa.com.tr/en/artificial-intelligence/openai-disrupts-israeli-firm-over-propaganda-content/3236289

46. OpenAI says stalled attempts by Israel-based company to interfere in Indian elections, accessed on December 17, 2025, https://www.thehindu.com/elections/lok-sabha/openai-says-stalled-attempts-by-israel-based-company-to-interfere-in-indian-elections/article68237334.ece

47. Antisemitism Archives – Jewish Insider, accessed on December 17, 2025, https://jewishinsider.com/tag/antisemitism/

48. Findings of our investigation into claims of manipulation on Reddit : r/RedditSafety, accessed on December 17, 2025, https://www.reddit.com/r/RedditSafety/comments/1j3nz7i/findings_of_our_investigation_into_claims_of/

49. Microsoft workers arrested after protesting company’s ties to Israel | Aug. 27–Sept. 2, 2025, accessed on December 17, 2025, https://www.realchangenews.org/news/2025/08/27/microsoft-workers-activists-arrested-after-protesting-company-s-ties-israel

50. Israel protest during Microsoft’s 50th anniversary meeting leads to firing, workers say – King 5 News, accessed on December 17, 2025, https://www.king5.com/article/tech/microsoft-workers-fired-50th-anniversary-protest-israel-contract/281-29de98c7-958a-4ca0-b55c-d52fbfd44708

51. Technologists band together to support besieged Palestine – The Hindu, accessed on December 17, 2025, https://www.thehindu.com/sci-tech/technology/technologists-band-together-to-support-besieged-palestine/article67701252.ece
