
How India’s Ruling Party is Using AI to Boost Hate Speech in States Near Bangladesh

Pooja Chaudhuri

31 March 2026, 10:00

The video posted by a state branch of India’s ruling Bharatiya Janata Party (BJP) showed Assam chief minister Himanta Biswa Sarma shooting an image of two men in Muslim skull caps. “Foreigner-free Assam”, read one caption across the video. “Why did you not go to Pakistan?” said another. 

Screenshots of the now-deleted video shared by BJP on Feb. 7 showing Assam Chief Minister Himanta Biswa Sarma shooting an AI-generated version of INC leader Gaurav Gogoi (in a white skull cap) and another unidentified, bearded man. Source: BJP4Assam/X

One of the men in the photo that Sarma was portrayed as shooting was Gaurav Gogoi, a leader of the Indian National Congress (INC), the BJP’s main competitor in Assam’s legislative elections next month.

Gogoi has stated that he is Hindu but enjoys visiting different religious sites and observing their norms. He has been photographed wearing traditional Muslim attire during religious occasions such as Eid.

But the image of him in the video shared by BJP Assam, wearing a casual singlet with a skull cap, was not one of those occasions. 

Bellingcat has seen several dozen videos posted by the BJP in December last year that pair generative artificial intelligence (AI) with anti-Muslim and anti-Bangladeshi messaging in the border states of Assam and West Bengal, ahead of legislative elections scheduled in both states for April.

Left: Original photo shared by Gogoi on Jun. 17, 2025. Right: An image shared by BJP Assam that was edited with AI to show Gogoi with a skull cap, beard and Quran. Source: gauravgogoiasm/Facebook, BJP4Bengal/Facebook

Bellingcat analysed 499 social media posts containing photos and videos shared on Facebook, Instagram and X by the BJP’s official accounts in the two states for this time period, finding 194 posts that appeared to meet the United Nations’ definition of hate speech: discriminating against persons or communities based on inherent characteristics such as religion and national origin. Of these, 31 (about one in six of the hateful posts) contained the obvious use of AI-generated imagery. 

Chart: Galen Reich

These appear to be part of a larger pattern of politicians and parties globally using generative AI to amplify hateful or divisive content, particularly ahead of major political events such as elections. 

Ahead of the New York City mayoral race last year, Andrew Cuomo’s official X account shared, then deleted, an AI-generated video depicting his rival Zohran Mamdani eating rice with his hands and a Black man in a keffiyeh shoplifting. In Italy, several opposition parties complained to a communications watchdog after deputy prime minister Matteo Salvini’s League party published a series of AI-generated images depicting men of colour attacking women or police officers. And in the UK, videos by an AI-generated rapper funded by the far-right Advance UK party, with lyrics targeting Muslims, were viewed millions of times.

A Campaign of Hate

Both Assam and West Bengal share a border with Bangladesh. BJP, the world’s largest political party, is currently in power in Assam, where legislative elections are scheduled for Apr. 9. West Bengal, which goes to the polls on Apr. 23, is governed by the Trinamool Congress (TMC).

Map: Pooja Chaudhuri. Source: Goran tek-en, CC BY-SA 4.0, via Wikimedia Commons 

Tensions between India and Bangladesh worsened after former Bangladeshi Prime Minister Sheikh Hasina, who enjoys close ties with Delhi, was ousted in 2024 and fled to India.

US-based international affairs expert Mohammed Zeeshan told Bellingcat that the “dehumanising and debasing” terminology used in India to refer to alleged illegal Bangladeshi immigrants, including by senior ministers, has caused resentment towards India in Bangladesh. 

“The situation, in fact, was so bad that Hasina herself had subtly warned the Modi government in public statements that Indian domestic rhetoric was endangering Bangladeshi Hindus, who bore the brunt of that resentment,” Zeeshan said. 

Zobaida Nasreen, a professor of anthropology at Dhaka University, said that intensifying anti-Muslim rhetoric from BJP leaders reinforces the belief in Bangladesh that Muslims and Bengalis are being collectively targeted in India.

“Viral videos containing this message tend to spread quickly across Bangladeshi media and social platforms especially on Facebook, enhancing perceptions of hostility and triggering anti-India sentiment or nationalist backlash,” she added.

In December, the month our dataset was collected, Dipu Das, a Hindu garment worker, was beaten to death at an anti-India protest in Bangladesh over allegations that he had made derogatory remarks about Islam. 

And while the administration led by Bangladesh’s newly elected leader Tarique Rahman has sought to reset strained ties, most of the hateful social media posts we saw posted by the BJP in December attacked Bangladeshi Muslims and/or Bengali-origin Muslims in India, showing how tensions between the two countries continue to influence political messaging in India’s border states.

Bellingcat’s analysis included a total of 202 posts by BJP Assam and 297 by BJP’s West Bengal branch on their official accounts. We also looked at posts shared by BJP’s main opponent parties – 194 from INC in Assam and 357 from the TMC in West Bengal – during the same time period in December. 

This included all visual social media posts (containing photos or videos) by each party in December, except those that did not appear to contain any overt political messaging, such as those simply commemorating public holidays. We only counted each photo or video once, regardless of how many platforms it was shared across. 
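Bellingcat has not published the tooling behind this tally, but the once-per-item rule is straightforward to enforce in code. Below is a minimal Python sketch, assuming each post’s media has been downloaded as raw bytes and that cross-posts are exact byte-for-byte copies; in practice, reposts across platforms are often re-encoded, which would require perceptual hashing or manual review instead.

```python
import hashlib

# Hypothetical records: the same image bytes cross-posted on two platforms.
posts = [
    {"platform": "facebook", "media": b"<same image bytes>"},
    {"platform": "x", "media": b"<same image bytes>"},
]

seen: set[str] = set()
unique_posts = []
for post in posts:
    # Identical media bytes hash identically, so one photo or video
    # shared across several platforms collapses to a single entry.
    key = hashlib.sha256(post["media"]).hexdigest()
    if key not in seen:
        seen.add(key)
        unique_posts.append(post)

print(len(unique_posts))  # 1, not 2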

Although all of the major parties contesting in the Assam and West Bengal state elections appeared to use AI-generated imagery in some of their posts, there appeared to be a particularly high concentration of hateful messaging in the ones posted by the BJP’s accounts. 

In Assam, we identified 28 posts by BJP using apparently AI-generated imagery, of which 24 carried hateful messaging. Of the 194 INC posts we looked at from December, 41 appeared to feature AI-generated imagery, but none of these appeared to carry hateful messaging. 

In West Bengal, we found 14 BJP posts that contained clear indicators of AI-generated imagery, seven of which were hateful. We also identified 15 posts by the incumbent TMC that appeared to feature AI imagery, but none of these appeared to meet the definition of hate speech. 


When contacted for comment, BJP Assam spokesperson Rupam Goswami did not directly respond to questions on the party’s general use of AI but said they did not post any AI-generated photos of Gogoi. “BJP does not stoop so low,” he told Bellingcat.

As for the “point blank” shooting video, Goswami initially said the person responsible had been punished and removed from the party. However, when asked about Sarma saying that he would re-post the video with those he was depicted shooting labelled as “Bangladeshis”, Goswami said, “[Bangladeshis] need to be completely suppressed.”

BJP West Bengal did not respond to multiple requests for comment by Bellingcat via phone and email.

It is important to note that as generative AI technology improves, it can be increasingly difficult to detect AI-generated imagery. Our manual count of AI-generated imagery only included posts that had obvious signs of generative AI such as unnaturally smooth textures and multiple people with the same faces. It is therefore possible that there were other images in our dataset where generative AI was used more subtly. 

However, Joyojeet Pal, Professor of Information at the University of Michigan, told Bellingcat that the quality of these visuals, or whether they looked real, was not the priority. 

“What politicians in India have understood is that the sociocultural drivers of misinformation are most important for elections, so they harp on about things to the extent that they have started to not care about form over substance. It looks bad? It doesn’t matter,” he said.

More important to voters, according to Pal, was whether they already believed in the narrative contained in the videos, which generative AI could help create more quickly: “AI is helping cement polarised opinions by giving you the kind of content you have already decided you want to engage with.” 

When asked about INC’s use of AI, party spokesperson Aman Wadud said that it was obvious that some of the videos they posted were made with AI and that there was no intention to mislead. 

“AI can be both destructive and creative. We are using it in a creative manner, we are not using it in a destructive manner. We don’t violate people’s dignity, we don’t falsely accuse people,” he said.

TMC did not respond to Bellingcat’s multiple requests for comment via phone and email by publication time.

Portraying Bengali Muslims as ‘Foreigners’

The largest category of hateful messaging Bellingcat observed in the BJP’s posts targeted Bangladeshi or Bengali-origin Muslims, referring to them as “infiltrators” or “foreigners”. We counted 66 such posts by the BJP’s Assam and West Bengal branches from December, of which eight appeared to contain obvious AI-generated imagery. 

Bengali-origin Muslims are often stereotyped as “illegal immigrants” in the state, although members of the community have lived in India since the late 1800s.

Last year, the BJP deported thousands of alleged undocumented migrants – reportedly including Indian Muslim citizens – to Bangladesh. Human rights groups have called the deportations unlawful and discriminatory, as well as lacking in due process.

One video referencing this theme shows AI-generated visuals of protests against “illegal infiltration” in Assam, with the caption urging people to “wake up” or the country would “turn into Bangladesh”. 

A different one uses real footage from past violence in Assam mixed in with images of Muslim men. A song playing in the background accuses them of taking over “Assamese land” and shows AI images of “Assamese” people, i.e. those not in stereotypical Muslim clothing, crying.

An AI-generated image of a crying man in non-Muslim clothing and a traditional Assamese scarf on his shoulders. Source: BJP4Assam/X

Both videos use religious markers to draw a distinction between “infiltrators” – men in skull caps or lungis associated with Bengal-origin Muslims – and “citizens” in non-Muslim attire. 

Clothing is often used by the Hindu far-right as a visual shorthand for identity and a deepening religious divide. In 2019, Prime Minister Narendra Modi said of protests against a controversial citizenship law that those responsible for violence could be “identified by their clothes”.

In the hateful posts seen by Bellingcat, both real and AI-generated images of opposition figures – particularly Gogoi – were shown alongside messaging that suggested that they supported “foreigners” or “infiltrators”. 

The Center for the Study of Organized Hate (CSOH) also noted, in a 2025 report on AI-generated imagery and Islamophobia in India, that Hindu far-right politicians and media outlets have invoked and reinforced the trope of Muslims as “infiltrators” for years.

“AI-generated images on these themes reinforce associations between Muslim identity and illegality, reinforcing xenophobic and Islamophobic stereotypes. In doing so, they play a powerful role in justifying exclusionary policies and normalising discrimination against Muslims,” the report said. 

‘Save Hindus’

Zenith Khan, a data analyst who worked on the CSOH report, noted that AI-generated propaganda was often tightly tied to current political moments, and that its impact depended on “timing it right”, especially when “people are emotionally charged”.

The violence against the minority Hindu community in Bangladesh has been used by the BJP to raise concerns over the safety of Hindus in India. 

Days after Das’ lynching, the Assam state branch of BJP posted a video with an image of his face – except that it was manipulated with AI to show tears streaming from his eyes. “Save Hindus”, said the text accompanying the video. 

Posts by BJP’s West Bengal unit also seemed to frame Muslims as criminals or threats. A video, styled after the TV show “Stranger Things”, raised alarms over an “upside down” version of the state under the current government. 

A man is depicted being chased by men in skull caps. Arrows label them as “Ralib,” “Galib,” and “Chalib” – a play on Muslim names ending in “-lib” – in case the skull caps left any ambiguity about their Muslim portrayal. 

“Stranger Things” themed post that depicts Hindus under threat from Muslims in West Bengal. Source: BJP4Bengal/X

INC filed a police complaint in September last year against the BJP for sharing AI videos targeting Gogoi and the Muslim community, as well as another complaint in relation to the video of Sarma portrayed as shooting two men “point blank” in February. 

INC Assam spokesperson Wadud said that no action had been taken on the party’s police complaints as far as he knew. 

Disinformation researcher Bharat Nayak told Bellingcat that it has always been tech platforms’ responsibility to control new types of content. 

“The goal post can’t shift. This has always been a tech problem,” he said. 

When this responsibility is shrugged off, Nayak added, the result is a lack of accountability. “If you’re using old videos from other countries as new, you will have people countering you. But AI-generated videos can be shared without context just to spread hate – like showing people in skull caps – and the ‘when, where, how’ questions vanish.”

Both Meta – which owns Facebook and Instagram – and X have policies against hateful conduct. 

Meta also announced in 2024 that it would start adding “AI info” labels to more content detected as AI-generated, while some X users spotted a similar feature introduced on the platform last month. Only five of INC’s AI visuals that we identified – and none of those by TMC or the BJP – had a disclaimer that said “AI-generated”. 

Bellingcat reached out to Meta and X for comment on whether the posts we identified breached their terms of use regarding hateful conduct or labelling AI-generated posts. A Meta spokesperson said they were reviewing the flagged content and “will take appropriate action on any violations of our policies”. As of publication, X had not responded.


Kalim Ahmed from Bellingcat’s Discord Community contributed research to this piece.



Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children

Kolina Koltai

10 February 2026, 11:57

In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy. 

While some survivors of Epstein’s abuse have chosen to identify themselves, many more have never come forward. In a joint statement, 18 of the survivors condemned the release of the files, which they said exposed the names and identifying information of survivors “while the men who abused us remain hidden and protected”. 

After the latest release of documents on Jan. 30 under the Epstein Files Transparency Act, thousands of documents had to be taken down because of flawed redactions that lawyers for the victims said compromised the names and faces of nearly 100 survivors. 

But X users are trying to undo the redactions even on images of people whose faces were correctly redacted. By searching for terms such as “unblur” and “epstein” with the “@grok” handle, Bellingcat found more than 20 different photos and one video that multiple users were trying to unredact using Grok. These included photos showing the visible bodies of children or young women, with their faces covered by black boxes. There may be other such requests on the platform that were not picked up in our searches.
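Such searches can be reproduced with X’s ordinary search box. As a rough sketch, the query URLs might look like the following; the first phrase mirrors the terms named above, the second is our own illustrative variant, and “f=live” is X’s “Latest” filter:

```python
from urllib.parse import quote

# Search phrases of the kind described above; the first mirrors the
# terms we used, the second is an illustrative variant.
queries = [
    '"unblur" epstein @grok',
    '"remove the black box" epstein @grok',
]

# f=live requests X's "Latest" tab rather than the default "Top" results.
for q in queries:
    print(f"https://x.com/search?q={quote(q)}&f=live")
```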

Requests by X users for Grok to unblur and identify the images of children from the Epstein files, overlaid on an image of Epstein next to a young child in a pool. Source: X; collage by Bellingcat

The images appeared to show several children and women with Jeffrey Epstein as well as other high-profile figures implicated in the files, including the UK’s Prince Andrew, former US President Bill Clinton, Microsoft co-founder Bill Gates and director Brett Ratner, in various locations such as inside a plane and at a swimming pool.

From Jan. 30 to Feb. 5, we reviewed 31 separate requests from users for Grok to “unblur” or identify the women and children from these images. Grok noted in responses to questions or requests by some users that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive images from the Epstein files”, and said it could not unblur or identify them. However, it still generated images in response to 27 of the requests that we reviewed. 

We are not linking to these posts to prevent amplification.

The images Grok generated ranged in quality from believable to comically bad, such as a baby’s face on a young girl’s body. Some of these posts have garnered millions of views on X, where users are monetarily incentivised to create high-engagement content.

Examples of posts by X users asking Grok to unredact images from the latest Epstein release, some with millions of views. Source: X

Of the four requests we found during this period to which Grok did not generate images, one received no response at all. In response to another, Grok said deblurring or editing images was outside its abilities, and noted that photos from recent Epstein file releases were redacted for privacy.

The other two requests appeared to have been made by non-premium users, with the chatbot responding: “Image generation and editing are currently limited to verified Premium subscribers”. X has limited some of Grok’s image generation capabilities to paid subscribers since January, amid an ongoing controversy over people using the AI chatbot to digitally “undress” women and children.

X did not respond to multiple requests for comment. 

However, shortly after we first reached out to X on Feb. 6, we noticed that more guardrails appeared to have been put in place. Out of 16 requests from users between Feb. 7 and Feb. 9, which we found using similar search terms as before, Grok did not attempt to unredact any of the images.

In most cases (14), Grok did not respond at all, while in the other two, it generated AI images that were completely different from the ones uploaded in the users’ original requests.

When a user commented on one of these requests that Grok was no longer working, Grok responded: “I’m still operational! Regarding the request to unblur the face in that Epstein photo: It’s from recently released DOJ files where identities of minors are redacted for privacy. I can’t unblur or identify them, as it’s ethically and legally protected. For more, check official sources like the DOJ releases.”

As of publication, X had not responded to Bellingcat’s subsequent query about whether new guardrails had been put in place over the weekend.

Fabricated Images

This is not the first time AI has been used to fabricate images related to Epstein file releases. Some images shared on X, which appeared to show Epstein alongside famous figures such as US President Donald Trump, as well as New York City mayor Zohran Mamdani as a child with his mother, were reportedly AI-generated. Some of the individuals shown in the false images, such as Trump, do appear in authentic photos, which can be viewed on the DOJ website.

Far left: AI-generated photo of Trump and Epstein with several children. Middle and far right: AI-generated photos of a young Mamdani and his mother, alongside Epstein, former US president Bill Clinton, Amazon CEO Jeff Bezos, Microsoft co-founder Bill Gates and Epstein associate Ghislaine Maxwell. Source: X. Annotations by Bellingcat

X users also previously used Grok to generate images in relation to recent killings in Minnesota by federal agents. 

For example, some users asked Grok to try to “unmask” the federal agent who killed Renee Good, resulting in a completely fabricated face of a man that did not look like the actual agent, Jonathan Ross, and a false accusation of a man who had nothing to do with the shooting.

Bellingcat’s Director of Research and Training @giancarlofiorella.bsky.social appeared on CTV yesterday to discuss the misleading AI-generated images that were used to falsely identify ICE agents and weapons at the centre of the two fatal shootings in Minneapolis youtu.be/mL7Fbp3UrSo?…


— Bellingcat (@bellingcat.com) 5 February 2026 at 09:36

After Alex Pretti was shot and killed by federal agents in Minneapolis, people used AI to edit video stills, resulting in AI images that showed a completely different gun than the one actually owned by Pretti. In another instance, an AI-edited image of Pretti’s shooting falsely depicted the intensive care unit nurse holding a gun instead of his sunglasses. 

Grok has also been at the centre of a controversy for generating sexually explicit content.

On Twitter/X, users have figured out prompts to get Grok (their built in AI) to generate images of women in bikinis, lingerie, and the like. What an absolute oversight, yet totally expected from a platform like Twitter/X. I’ve tried to blur a few examples of it below.


— Kolina Koltai (@koltai.bsky.social) 6 May 2025 at 03:20

Multiple countries including the UK and France have launched investigations into Elon Musk’s chatbot over reports of people using it to generate deepfake non-consensual sexual images, including child sexual abuse imagery. Malaysia and Indonesia have also blocked Grok over concerns about deepfake pornographic content. 

One analysis by the Center for Countering Digital Hate found that Grok had publicly generated around three million sexualised images, including 23,000 of children, in 11 days from Dec. 29, 2025 to Jan. 8 this year. X’s initial response, in January, was to limit some image generation and editing features to only paid subscribers. However, this has been widely criticised as inadequate, including by UK Prime Minister Keir Starmer, who said it “simply turns an AI feature that allows the creation of unlawful images into a premium service”. The social media platform has since announced new measures to block all users, including paid subscribers, from using Grok via X to edit images of real people in revealing clothing such as bikinis.




Profiting From Exploitation: How We Found the Man Behind Two Deepfake Porn Sites

Kolina Koltai

15 December 2025, 12:00

Content warning: This article contains descriptions of non-consensual sexual imagery.

Depending on which of his social media profiles you were looking at, Mark Resan was either a marketing lead at Google or working for a dental implant company, a human resources company and a business software firm – all at the same time.           

Facebook photos showed Resan vacationing in Bali (left) and relaxing at luxury hotels in Dubai (right). Blurring by Bellingcat

But a Bellingcat investigation has found that the Hungarian national is the key figure behind, and the likely owner of, at least two deepfake porn websites – RefacePorn and DeepfakePorn – that until recently were selling paid subscriptions. 

There is no question about the nature of these websites. RefacePorn’s landing page shows an explicit video of a woman performing a sexual act. As the video plays, her face is replaced with a variety of other women’s faces. The text above declares: “Face swap deepfake porn. Upload your face!” 

Deepfake porn sites such as these, which use artificial intelligence to create sexually explicit images and videos – usually without the consent of those whose faces or bodies are featured – have proliferated at an alarming rate in recent years. The impact on victims has been described as “life-shattering”, with the mental health effects similar to those reported by victims of sexual assault.

While the technology to make these synthetic images is not new, the rise of mainstream AI image generator tools and “Nudify” apps has made it more widely available to people without deep technical expertise. Earlier this year, New Zealand MP Laura McClure held up an AI-generated nude of herself in parliament, describing how it took her less than five minutes to create after a quick Google search. 

A 2024 study by the My Image My Choice campaign found that there was a 1,780 percent increase in sexually explicit deepfakes last year compared to 2019. Almost all (99 percent) of victims were women, according to a 2023 study by Security Hero. 

Illustration for Bellingcat by Ann Kiernan

The creation of such images and videos is now illegal in a few countries, including the US and the UK, but legislation has not caught up in many others, and the owners of platforms that enable this content often face no repercussions. In May 2024, the EU passed a directive which mandates that member states – including Hungary, where Resan resides – criminalise the creation and distribution of non-consensual sexual deepfakes by June 2027. 

Alexios Mantzarlis, co-founder of Indicator, a news site that focuses on digital deception, said his publication estimates that deepfake porn sites likely make millions of dollars a year. 

“The incentive system will continue to exist until the tools become too toxic to handle for domain hosts and content delivery networks,” added Mantzarlis, who is also the director of the Security, Trust and Safety Initiative at Cornell Tech.

All Roads Lead to Resan

Bellingcat’s investigation into RefacePorn and DeepfakePorn – which spanned corporate registries, domain name registrations, payment redirect sites, website code and leaked data – led us back to Resan. 

By simulating the purchase of subscriptions on these websites, Bellingcat was led through a series of redirects to a dashboard on Peerwallet, a payment processor, which recorded more than US$331,000 in sales by Dorocron LLP from July 2024 to August 2025. Dorocron is a Canadian-registered company whose main – if not sole – source of income appeared to be from paid subscriptions to these sites. The real amount is likely higher, as this was just one of several payment processors the websites have used.


Dorocron LLP did not respond to multiple requests for comment via email, and calls to the number listed on sites that had the company’s details in their legal information sections went unanswered.

Resan is the only person who appears to have been publicly associated with Dorocron LLP, and he is also the sole director of a UK-registered company, Facitic Ltd, that registered the domain of RefacePorn. Resan did not respond to multiple requests for comment sent via email over the past two weeks. Multiple emails and phone calls to Facitic Ltd also went unanswered.

However, days after we first reached out to Resan, his LinkedIn and X profiles were deleted, and his previously public Facebook profile was either deleted or made private. Both RefacePorn and DeepfakePorn also became inaccessible, displaying an error message that said “this site can’t be reached”. 

Archives of RefacePorn and DeepfakePorn, which were previously available on the Internet Archive’s Wayback Machine, have also now been excluded from the archive. The Internet Archive told Bellingcat it processed exclusion requests submitted by someone with rights to both sites on Dec. 5. 

Following the Money

As with other websites Bellingcat has investigated, RefacePorn’s ownership was hidden behind a network of website domains, fake websites used to redirect payments, and international business registries.

Using the tool DNSlytics, we examined the Google tag history on RefacePorn and found a tag that was also used on DeepfakePorn, as well as a website called facitic.com. 

Google Analytics tags are small pieces of unique code that developers can place in the backend of a website to track its analytics. Each code is unique to a specific user, who can use the same tag across multiple websites. 
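As an illustration of this pivot (a sketch, not the tooling we used: DNSlytics searches historical records, while a live fetch like this only sees a site’s current HTML), extracting Google tag IDs from a page might look like the following in Python, with placeholder domains:

```python
import re
import urllib.request

# The two common Google tag formats: legacy Universal Analytics IDs
# ("UA-12345678-1") and newer gtag/GA4 measurement IDs ("G-XXXXXXXXXX").
TAG_PATTERNS = [
    re.compile(r"UA-\d{4,10}-\d{1,4}"),
    re.compile(r"G-[A-Z0-9]{6,12}"),
]

def google_tags(url: str) -> set[str]:
    """Return any Google tag IDs found in a page's current HTML."""
    with urllib.request.urlopen(url, timeout=15) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    found: set[str] = set()
    for pattern in TAG_PATTERNS:
        found.update(pattern.findall(html))
    return found

# Sites sharing a tag ID were likely configured by the same operator.
# Placeholder domains: the sites in this story are now offline.
for site in ("https://site-a.example", "https://site-b.example"):
    print(site, google_tags(site))
```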

Both RefacePorn and DeepfakePorn offer tiered subscription packages with similar names and prices based on the number of deepfakes that could be generated and the level of support. 

When simulating a purchase of one of these packages – without actually completing payment – on DeepfakePorn, we received a link to make a payment hosted through the domain “remakerai.me”. Similarly, a mock purchase on RefacePorn pointed us to a payment link on “airemaker.me”. Bellingcat has observed the use of redirects, which can be used to obscure payments, by other deepfake porn sites. Many payment processors, including PayPal and Stripe, have restrictions on buying or selling sexually oriented online content.

Payment processors often block payments that come from websites making deepfake pornography.

Using a redirect site hides the original site from the payment processor, making it harder to block.

Despite this, payment processors sometimes manage to block the redirect site.

But if one redirect site is blocked, the site owner can quickly switch to another redirect site that isn’t blocked.

Graphic: Galen Reich
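One simplified way to observe such a chain, assuming the hops happen as HTTP redirects (sites that redirect via JavaScript would need a headless browser instead), is to let an HTTP client record each hop. A Python sketch with a placeholder URL:

```python
import requests  # third-party: pip install requests

def redirect_chain(url: str) -> list[str]:
    """Follow a link and return every hop, ending with the final URL."""
    resp = requests.get(url, allow_redirects=True, timeout=15)
    # resp.history holds the intermediate 3xx responses, in order.
    return [r.url for r in resp.history] + [resp.url]

# Placeholder checkout link, for illustration only.
for hop in redirect_chain("https://checkout.example/pay/123"):
    print(hop)
```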

The redirected payment links hosted on airemaker.me and remakerai.me offered several payment options including PayPal, credit cards and cryptocurrencies. Bellingcat selected the credit card option, and in both cases was emailed a link to complete the purchase on a payment platform called Peerwallet. This email included a link to the seller’s profile, Dorocron LLP.

This profile showed the funds received by the seller, which totalled more than $331,000 as of August 2025. This income was related to 16,264 sales. According to this dashboard, Dorocron LLP had been a member of Peerwallet since July 22, 2024, meaning these sales all occurred over the past year.

Screengrab of Peerwallet profile for Dorocron LLP, showing about US$331,000 in funds received for sales 

RefacePorn has been active since at least May 2022, according to promotional posts by an Instagram account with the username “Dorocron2323” and the account name “Hassler Mark”. Social media accounts for RefacePorn were also created on X and Facebook in May 2022.

Screengrab of an Instagram post from May 2022 promoting RefacePorn’s website, which is now down. Blurring by Bellingcat

While the transactions on Peerwallet were not broken down by domain, two of the 21 “approved domains” listed on this profile were the payment redirect sites for the deepfake porn sites we investigated. Bellingcat’s review found no evidence that payments were ever accepted through the other 19 sites.

Short-lived, “disposable” domains are known to be used by bad actors to evade detection, presenting a moving target for payment processors and authorities. As of publication, both airemaker.me and remakerai.me are no longer accessible. But in the course of the investigation, we observed RefacePorn and DeepfakePorn’s payment links redirecting to other third-party sites, before the sites went offline.

The Peerwallet profile showed transactions by users, as well as 21 approved domains including those redirecting payments for RefacePorn (refaceporn.com) and DeepfakePorn (deepfakeporn.app)

Of the 21 domains on Dorocron LLP’s Peerwallet profile, only two were still accessible as of the end of November, with the rest either down due to expired domains or server issues, displaying generic domain parking pages, or requiring a login to view. Though almost all of the sites had their registration information redacted, Resan was listed as the most recent registrant for one of the expired domains.

The two sites still accessible listed a variety of products, including eBooks and other digital goods. Both had almost identical products and templates, and listed Dorocron LLP under their company information in their footers.

Bellingcat tried to check out items on each of the sites, and in both cases was prompted to log in. It was, however, impossible to register an account, and when we tried with an active email address we were redirected to a login page saying that the email address was “unknown”. 

Archived screengrabs of some of the sites that now have expired domains or require a login to view showed that many of them followed the same format, selling eBooks and video courses with “resell rights”.

Peerwallet told Bellingcat in September that Dorocron LLP was “not approved” to sell deepfake porn, and that it was looking into the issue. However, when Bellingcat asked for an update in November, Peerwallet appeared to have closed down. Emails to the payment processor’s founder have also gone unanswered. 

The Man Behind the Screen

Dorocron LLP was registered in British Columbia, Canada in March 2022. We were unable to verify if Resan’s name was on the corporate records as information on company owners or directors in British Columbia is restricted to law enforcement and other officials. 

However, Resan’s name has been used, alongside an email bearing Dorocron’s name, to register at least 13 sites from as far back as 2013, nine years before Dorocron was registered in Canada. The earliest domain registration, from 2013, included the name of a now-dissolved UK-registered company called “Webnaser LTD”, whose registration documents also cite Resan as the sole director.

WHOIS history information for a site that Resan first registered in 2013. Source: Whoxy
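For readers unfamiliar with WHOIS, current registration data can be queried directly. A minimal sketch using the third-party python-whois package (the domain is a placeholder):

```python
import whois  # third-party: pip install python-whois

# A live lookup returns only the *current* WHOIS record, which for these
# domains is now largely redacted; historical entries like the 2013
# record above come from archival services such as Whoxy or DomainTools.
record = whois.whois("example.com")
print(record.registrar)
print(record.creation_date)
print(record.emails)
```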

A leak found on data breach site Intelx.io shows that an almost identical password (with different capitalisation of some letters) was used to log into this “dorocron” Gmail account and a Netflix account associated with Resan’s personal email address. This password was also used to log into web domain registry GoDaddy using RefacePorn’s support email address. 

Leaked passwords on Intelx.io revealed another link between Resan and DeepfakePorn: an email with the username “resanmark” was used to log into DeepfakePorn’s website, with a password containing his birth year. In all, we found four unique passwords that were reused between Resan’s personal emails, the Dorocron emails, and a support email for RefacePorn. These four passwords include either Resan’s name or the date or year of his birth. 
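The pivot itself is simple to express in code: group breach records by a case-normalised password and look for a single credential spanning otherwise unconnected accounts. A sketch with invented records (none of the emails or passwords below come from the actual leak):

```python
from collections import defaultdict

# Invented breach-dump rows (email, password), for illustration only.
records = [
    ("personal.name@mail.example", "Examplename1990!"),
    ("dorocron.admin@mail.example", "examplename1990!"),
    ("support@deepfake-site.example", "EXAMPLENAME1990!"),
]

by_password: dict[str, list[str]] = defaultdict(list)
for email, password in records:
    # Case-normalising treats passwords that differ only in
    # capitalisation, as described above, as one credential.
    by_password[password.lower()].append(email)

for emails in by_password.values():
    if len(emails) > 1:
        print("shared credential across:", emails)
```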

Resan also posted two job listings from his now-deleted LinkedIn account about a year ago, for a full-stack web developer and a WordPress developer at Dorocron LLP. In the web developer listing, he described the company as “developing and applying revolutionary AI technologies” and said the job would have “high wages”. We could not find any other individual with a public association to Dorocron LLP on LinkedIn or elsewhere.


Aside from his links to Dorocron LLP, Resan is also the sole director and person with significant control of Facitic Ltd, a UK-registered company which was listed as the registrant for RefacePorn. 

Using DomainTools, we were able to see the historical registrant information in a WHOIS lookup of the site’s domain registration. When we checked this in August 2025, we were able to see that, as of June 2025, Facitic Ltd was the registered owner of RefacePorn. This information was later redacted – as it is for other sites linked to Resan such as DeepfakePorn. 

ICANN, which oversees the domain name system, requires domain name providers to verify the accuracy of their customers’ details, including the registrant’s name and contact details. Such details are publicly visible by default, but can be anonymised using paid privacy services.

The UK registration for Facitic Ltd lists Resan’s country of residence as Dubai, while the registration for another UK company he registered – which was also listed as the owner of some of the now-expired approved domains on Dorocron LLP’s Peerwallet profile – states that he resides in Cyprus. Meanwhile, Resan’s social media accounts stated that he lives in Hungary. On Peerwallet’s dashboard, the primary user of Dorocron is listed as being based in Hungary. 

It is unclear if Resan actually holds positions in any of the six companies he listed himself as working at on his Facebook and LinkedIn profiles. Bellingcat has reached out to these companies to check, but has not received any replies as of publication. 

Some of the connections Bellingcat found between RefacePorn and Mark Resan:

Graphic: Galen Reich

On Nov. 10, 2025, a few weeks before we contacted him, Resan applied for Facitic Ltd to be struck off the UK companies register. Based on Resan’s filings, Facitic Ltd was incorporated with an initial capital of £100 in January 2024, and there has been no recorded change in its accounts since. 

This comes as UK regulator Ofcom cracks down on websites associated with UK businesses offering AI-powered nudify services. On Oct. 23, Ofcom imposed a £50,000 fine on UK-registered company Itai Tech Ltd, which has been linked to some of the biggest deepfake pornography sites in the world, for failing to prevent children from accessing pornographic content. 

It is unclear what triggered Resan to file to dissolve the company, and he did not respond to Bellingcat’s query about this. 

Small Sites, Big Harm

The websites linked to Resan are not among the largest in the deepfake porn industry. A similar but much larger site that Bellingcat has investigated, MrDeepFakes, received millions of visits each month. Bellingcat and its partners Tjekdet, Politiken and CBC exposed the site’s key administrator David Do in May, with MrDeepFakes going offline after we reached out to Do for comment. 

In comparison, RefacePorn and DeepfakePorn received about 91,000 and 154,000 visits in October, according to digital marketing platform SemRush. But their smaller size does not mean they can’t cause significant harm. 

Mantzarlis, of the news site Indicator, said there were “smaller players” taking bigger risks around regulation, such as “Crush AI”, a group of Chinese-owned apps that bypassed Meta’s moderation rules to run 25,000 ads on Facebook and Instagram before the social media giant sued them. 

“These smaller players are often the ones that are more actively trying to stand out on social media to catch up with the bigger ones,” Mantzarlis said.

In the course of our investigation, we ran tests using the free features on RefacePorn to determine if there were any restrictions on images that could be uploaded on the website. 

Without actually generating the content, we uploaded AI-generated images of adult women and underage girls. Unlike on other websites we have tested, which have added the bare minimum of checks to prevent uploading images depicting children, there was no restriction or evidence of age-related safeguards on RefacePorn. 

While there aren’t laws in Hungary explicitly prohibiting deepfake porn, the possession, creation and distribution of sexually explicit images of minors is illegal.

“As the more established websites come under sustained regulatory pressure and others get litigated into oblivion, the minnows are ready to try and capture market share,” Mantzarlis said. 

And while some sites such as RefacePorn and DeepfakePorn may fold in the face of public scrutiny, others continue to operate, unchecked and easily accessible, online. 

“These websites are eminently replaceable and there's no reason to believe that there is any form of ‘brand loyalty’,” Mantzarlis said. “Perpetrators are going to search for ‘nudify’ or click on an ad and go to whatever tool does the job.”


Melissa Zhu contributed to this report.

