The Future 22 min

Why No Country Has a Law for the Robot Walking Down Your Street

By Robots In Life
regulation law privacy GDPR AI Act public-space surveillance

TL;DR

A humanoid robot walked through the streets of Warsaw, Poznan, and the corridors of the Polish parliament carrying depth cameras, 3D LiDAR, and a microphone array. It filmed everyone it passed. No law required consent. No regulator intervened. The EU has three major frameworks that could apply - GDPR, the AI Act, the Cyber Resilience Act - and none of them were enforced. This is not a Polish problem. No country on Earth has a functioning legal framework for humanoid robots collecting data in public spaces. The technology is deployed. The law does not exist.

In early March 2026, a humanoid robot named Edward Warchocki walked through the streets of Warsaw and Poznan. He crossed pedestrian crossings while drivers stared. He tried to enter the Copernicus Science Centre without a ticket. He posed for photos with strangers on the sidewalk. Everywhere he went, his depth cameras captured 3D maps of every face, wall, and doorway within range. His LiDAR swept the environment at 200,000 points per second. His microphone array recorded every voice nearby.

Nobody stopped him. Nobody asked what he was recording. Nobody checked where the data was going.

Three weeks later, on March 25, the same robot walked into the Polish Sejm, delivered a formal speech, and strolled through the corridors of one of Europe’s most sensitive government buildings. Politicians laughed and posed for selfies. The internet went wild.

The story of Edward Warchocki’s Sejm visit has been told, including by us. But the Sejm was only the most dramatic chapter of a longer story. Before the robot entered parliament, it spent weeks walking through public spaces, collecting sensor data from everyone it encountered. And that is the part that nobody is talking about: not what happened inside the parliament, but what happened on the sidewalk.

Because on the sidewalk, you do not even get the pretense of a security check.

The sensor suite walking down your street

- 6 camera feeds: Intel RealSense D435i depth cameras
- 200,000 LiDAR points per second: 360-degree Livox MID-360
- 4 microphones: directional array with noise cancellation
- Data to China every 5 minutes: documented telemetry interval
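Those rates translate into a striking raw data volume over a three-week deployment. A back-of-envelope sketch: the LiDAR rate is documented, but the bytes per point, camera bitrate, and hours walked per day are assumptions chosen for illustration, not measured figures.

```python
# Rough estimate of raw sensor data captured during a three-week
# public deployment. Only the LiDAR point rate is documented; every
# other constant below is an assumption for illustration.

LIDAR_POINTS_PER_SEC = 200_000   # documented Livox MID-360 rate
BYTES_PER_POINT = 16             # assumed: xyz + intensity + timestamp
CAMERA_FEEDS = 6
CAMERA_MBPS = 4                  # assumed compressed bitrate per feed (Mbit/s)
HOURS_PER_DAY = 4                # assumed daily walking time
DAYS = 21

seconds = DAYS * HOURS_PER_DAY * 3600

lidar_bytes = LIDAR_POINTS_PER_SEC * BYTES_PER_POINT * seconds
camera_bytes = CAMERA_FEEDS * CAMERA_MBPS * 1e6 / 8 * seconds
total_gb = (lidar_bytes + camera_bytes) / 1e9

print(f"LiDAR:   {lidar_bytes / 1e9:,.0f} GB")
print(f"Cameras: {camera_bytes / 1e9:,.0f} GB")
print(f"Total raw capture: {total_gb:,.0f} GB over {DAYS} days")
```

Even under these conservative assumptions the robot sweeps up well over a terabyte of raw sensor data in three weeks; what fraction is stored or transmitted is exactly the question nobody can answer.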

The question nobody is asking

When a CCTV camera films you on a street in London, there is a data controller. The Metropolitan Police, the local council, or the shop owner operates that camera, and under UK GDPR, they have a legal obligation to tell you they are recording, explain why, limit what they store, and delete it when the purpose is served. Signs must be posted. Impact assessments must be conducted. There is a framework. It may be imperfect, but it exists.

When a humanoid robot films you on a street in Warsaw with depth cameras that capture the 3D geometry of your face, no equivalent framework applies. There is no sign. There is no data controller in any meaningful sense. The person operating the robot may not even know what sensors are active. The manufacturer’s firmware runs 26 daemon services at boot, several of which transmit data to servers in China every five minutes. The owner cannot turn those services off.

So who is the data controller? The person holding the joystick? The company that built the AI personality? The manufacturer whose firmware decides what gets transmitted? The cloud provider hosting the conversation backend? Under GDPR, someone must be the data controller whenever personal data is processed. In practice, nobody is.

Map the gap: regulation by regulation

The European Union has the most comprehensive technology regulation framework in the world. Three major pieces of legislation could, in theory, govern a humanoid robot walking down your street. Let us examine each one and see exactly where they fail.

GDPR: the privacy framework that forgot about robots

The General Data Protection Regulation, in force since May 2018, is the gold standard of data protection law. It applies to any processing of personal data by any entity operating within the EU. A robot with cameras and microphones in a public space is processing personal data. GDPR should apply. In theory, it does.

In practice, it does not work.

Why GDPR fails for robots in public spaces

Step 1: Identify the data controller. GDPR Article 4(7) requires a named controller for all processing. Problem: who controls the robot? The owner, the operator, the manufacturer, the AI provider, or the firmware author?

Step 2: Obtain consent or cite a legal basis. Article 6 requires consent or a legitimate interest for processing. Problem: consent is impossible in public. You cannot get informed consent from every person on a sidewalk.

Step 3: Conduct a DPIA for biometric data. Article 35 requires an assessment for systematic public monitoring. Problem: nobody knows a DPIA is required. Robot operators are hobbyists and entrepreneurs, not data processors.

Step 4: Inform data subjects. Articles 13-14 require transparency about what is collected. Problem: the robot does not disclose its sensors. People see a robot, not a surveillance platform with six cameras and LiDAR.

GDPR was designed for organizations that knowingly process personal data: companies running websites, hospitals managing patient records, banks handling financial information. These entities know they are data controllers. They have compliance departments. They file DPIAs as a matter of course.

A humanoid robot operator is typically an individual, a small company, or a research team. They bought a robot. They turned it on. They walked it down the street. At no point did anyone tell them they were now a data controller under European law, subject to fines of up to 20 million euros or 4% of global turnover.

The European Data Protection Board’s 2019 guidelines on video surveillance explicitly address CCTV in public spaces. They require data controllers to conduct DPIAs, post signage, limit retention, and justify processing under Article 6. But these guidelines were written for fixed cameras mounted on buildings. They do not contemplate a mobile sensor platform walking through a crowd, collecting 3D biometric data, with firmware that autonomously transmits information to servers in another country.

0 GDPR enforcement actions against humanoid robot operators worldwide

The AI Act: the risk framework with a mobility blind spot

The EU AI Act, finalized in 2024, is the world’s first comprehensive AI regulation. It classifies AI systems into four risk tiers: unacceptable, high, limited, and minimal. The prohibited tier bans real-time remote biometric identification in publicly accessible spaces, with narrow exceptions for law enforcement.

Here is where it gets interesting. If a humanoid robot's perception stack performs any real-time biometric identification, matching the faces it captures against known identities, even incidentally as part of navigation or human tracking, the AI Act's Article 5 prohibition triggers. The robot should not be able to do this in a public space.

But the AI Act focuses on intentional AI systems deployed for specific purposes. It regulates the software, not the hardware. A robot walking down the street is not “deploying a biometric identification system.” It is walking. The fact that its sensor suite can capture biometric-quality data is, under the current text, a side effect rather than a primary purpose.

What the AI Act gets right

AI Act Article 5 bans real-time biometric ID in public spaces
Article 50 requires disclosure that an AI system is being used
High-risk AI systems must undergo conformity assessments
Fines up to 35 million euros for prohibited practices

Where the AI Act falls short

No provisions specific to mobile robots in public spaces
Focuses on intentional AI deployment, not passive sensor collection
Does not address firmware-level data collection by hardware manufacturers
No requirement for robot registration or public-space operating permits
Enforcement depends on market surveillance authorities with no robotics expertise
Does not clarify data controller status for multi-party robot systems

The AI Act also has a timing problem. Its provisions are being phased in between 2024 and 2027. The prohibited practices under Article 5 became enforceable in February 2025. But the high-risk system requirements, conformity assessments, and market surveillance mechanisms do not fully apply until August 2027. Humanoid robots are already on the street. The regulatory infrastructure is years behind.

The Cyber Resilience Act: the hardware law that arrives too late

The Cyber Resilience Act, adopted in November 2024, directly addresses the kind of hardware security issues documented in the Unitree G1. It requires products with digital elements sold in the EU to meet mandatory cybersecurity requirements, including secure default configurations, timely security updates, and vulnerability disclosure processes.

The documented vulnerabilities in the Unitree G1 - fleet-wide shared encryption keys, an undocumented CloudSail backdoor, unencrypted internal data buses, telemetry transmission without consent - would likely fail these requirements.

The problem: the Cyber Resilience Act’s reporting obligations do not start until September 2026. Full compliance is not required until December 2027. Every Unitree G1 currently operating in Europe was sold before the law applies. And the Act contains no provision for retroactive assessment of products already on the market.

EU regulation timeline vs. robot deployment

- 2018: GDPR enforced. Written for websites and databases.
- 2025-2027: AI Act phasing in. Full enforcement August 2027.
- 2027: CRA full compliance. Nothing retroactive.

Beyond Europe: the global regulatory desert

If the EU, with its three overlapping frameworks, cannot cover this scenario, the rest of the world is in much worse shape.

United States: no federal framework at all

The United States has no federal privacy law comparable to GDPR. The closest equivalent, the American Data Privacy and Protection Act, failed to pass in 2022 and has not been reintroduced. State-level privacy laws like the California Consumer Privacy Act (CCPA) apply to businesses collecting consumer data, not to robots walking down the street.

The US has taken one targeted action. In March 2026, the bipartisan American Security Robotics Act was introduced to ban federal procurement of Chinese-made humanoid robots. This addresses the government supply chain problem but does nothing about commercial robots operating in public spaces. A Unitree G1 walking through Times Square is subject to exactly zero specific regulations.

The Federal Trade Commission has general authority over unfair and deceptive trade practices, which could theoretically be applied to undisclosed data collection by a robot. But the FTC has never brought such an action, has no guidance on humanoid robots, and moves on timescales measured in years, not weeks.

China: 20,000 robots, one intelligence law

China’s approach is the mirror image of Europe’s. Where Europe has extensive regulation and no enforcement, China has minimal public-facing regulation and extensive state access.

China’s Ministry of Industry and Information Technology published Humanoid Robot Innovation Guidelines in 2023 that set production targets of 20,000 domestic units by 2026. These guidelines focus entirely on industrial development. They contain no provisions for privacy, data protection, or public safety when robots operate in civilian spaces.

China does have a Personal Information Protection Law (PIPL), enacted in 2021, which is structurally similar to GDPR. It requires consent for biometric data processing, mandates data minimization, and imposes breach notification requirements. But PIPL has a critical exception: Article 13(4) allows processing without consent for “public health emergencies,” and Article 4’s definition of personal information has been interpreted narrowly by Chinese courts in the context of public-space monitoring.

More significantly, China’s National Intelligence Law of 2017 requires all organizations and citizens to “support, assist, and cooperate with national intelligence efforts.” This creates an inherent tension. Chinese robots operating in public spaces worldwide collect sensor data. Chinese law requires cooperation with intelligence services. The manufacturer’s privacy policy states that data is stored in China. Even if every privacy regulation were perfectly enforced, the structural conflict remains.

Timeline

- 2016: EU adopts GDPR. No mention of robots or autonomous mobile systems.
- 2017: China passes the National Intelligence Law. Article 7 compels corporate cooperation.
- 2017: The EU Parliament proposes "electronic personhood" for robots. Backlash kills the idea.
- 2018: GDPR enforcement begins.
- 2019: The EDPB issues video surveillance guidelines (fixed cameras only). Japan updates its Robot Strategy, addressing industrial safety but not public-space data.
- 2021: China passes PIPL, with a structural exception for state intelligence access.
- 2023: China's MIIT issues the Humanoid Guidelines. No privacy provisions.
- 2024: The EU AI Act is finalized, with a mobile robot blind spot and a phase-in through 2027. The Cyber Resilience Act is adopted; full compliance is not required until 2027.
- 2025: Alias Robotics publishes its Unitree G1 security audit. Telemetry to China documented.
- Q1 2026: Edward Warchocki walks Polish streets and enters the Sejm. No law applies. An estimated 20,000+ humanoid robots are deployed worldwide under zero specific laws.
- 2027: The EU AI Act and CRA become fully enforceable. Still no mobile robot provisions.

Japan: the robotics pioneer with no public-space rules

Japan is the world’s second most robotics-advanced nation and the birthplace of humanoid robot research. Honda’s ASIMO walked through public demonstrations in the early 2000s. SoftBank’s Pepper greeted customers in thousands of shops. Japan’s Act on Protection of Personal Information (APPI) was updated in 2022 and provides privacy protections comparable to GDPR.

But Japan has no specific regulation for robots in public spaces. Its 2015 Robot Strategy focused on manufacturing, healthcare, and infrastructure. The strategy envisioned robots as tools operating in controlled environments, not autonomous agents walking through city streets. Japan’s approach has been to regulate by sector (healthcare robots, industrial robots, delivery robots) rather than by capability, which means a humanoid robot with the sensor suite of an autonomous vehicle falls through every crack.

South Korea: the ethics charter that never was

South Korea drafted a Robot Ethics Charter in 2007. It was one of the world’s first attempts to create an ethical framework for robots coexisting with humans in public life. It was never formally adopted. South Korea’s Intelligent Robots Development Act of 2008 created a legal framework for the “development and distribution” of intelligent robots but focused on industrial promotion rather than public safety or data protection.

0 countries with laws specifically addressing humanoid robots in public spaces

The data controller problem: who is responsible when Edward films you?

This is not an abstract legal question. It has concrete consequences for every person who appeared in Edward Warchocki’s camera feeds during his weeks on Polish streets.

Under GDPR, someone must be the data controller. The data controller is the entity that determines the purposes and means of processing personal data. For Edward, there are at least five candidates.

The data controller question for a humanoid robot on the street

- Radoslaw Grzelaczyk: purchased the robot, owns the hardware, and decided to deploy it in public.
- Bartosz Idzik: built the AI system, controls the conversation pipeline, and chose the cloud providers.
- The joystick operator: controls where the robot walks and decides which streets and which crowds.
- Unitree Robotics: manufactured the hardware, wrote the firmware, and runs 26 daemon services including telemetry.
- The cloud LLM provider (undisclosed): processes voice data and stores conversation logs under an unknown retention policy.

In a traditional GDPR scenario, this would be resolved through contracts. A company hires a cloud processor, signs a Data Processing Agreement, and the roles are clear. But Edward is not a company. Edward is a hobby project that went viral. The creators described it as “a non-commercial initiative that was meant to be a kind of joke.” There are no DPAs, no processing records, no privacy notices, and no data protection officer.

The most troubling layer is Unitree itself. Even if the creators disabled every non-essential sensor, even if they never stored a frame of camera footage, the G1’s firmware-level telemetry service runs independently. It transmits data to MQTT servers at IP addresses 43.175.228.18 and 43.175.229.18 every five minutes. The owner cannot turn it off. This means Unitree is a de facto data controller for whatever telemetry the robot collects, yet Unitree has no EU representative, no GDPR compliance infrastructure, and a privacy policy that states plainly: “Your information will be stored in the People’s Republic of China.”
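A fixed five-minute cadence is, at least, detectable from outside the robot. As a minimal sketch (not Unitree-specific tooling, and the function name and jitter tolerance are invented for illustration), the following flags periodic "phone home" traffic given a list of outbound packet timestamps exported from a router log or a tcpdump capture:

```python
# Sketch: flag fixed-interval beacon traffic from outbound packet
# timestamps (seconds). A five-minute telemetry cadence shows up as a
# tight cluster of inter-arrival times around 300 s.

from statistics import median

def beacon_interval(timestamps, tolerance=0.05):
    """Return the beacon period in seconds if the traffic is periodic,
    else None. `tolerance` is the allowed relative jitter (assumed)."""
    if len(timestamps) < 3:
        return None
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    period = median(gaps)
    # Periodic if every gap sits within `tolerance` of the median gap.
    if all(abs(g - period) <= tolerance * period for g in gaps):
        return period
    return None

# Synthetic example: one packet roughly every 300 s with slight jitter.
ts = [0, 301, 600, 899, 1200, 1501]
print(beacon_interval(ts))   # roughly a five-minute beacon
```

This is the kind of check a network administrator can run today; it does not tell you what is inside the encrypted payload, only that the device is reporting home on a schedule.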

What happens when there are 20,000 robots on the streets

Edward Warchocki is one robot. He is remarkable because he is visible and viral. But the problem he illustrates is about to scale dramatically.

China’s MIIT targets 20,000 domestic humanoid robot shipments in 2026, nearly double the 2025 figure. Goldman Sachs projects between 250,000 and one million humanoid robots deployed globally by 2035. These robots will not all be in factories. The entire commercial thesis of humanoid robotics is that these machines will operate in human spaces: warehouses, hospitals, retail stores, public streets.

The scale of what is coming

- 12,800 humanoids shipped in 2025, 82% of them from China
- 20,000+ units: China's 2026 target under the MIIT guidelines
- 250,000 robots by 2035: Goldman Sachs base case (conservative estimate)
- 1 million+ by 2035: Goldman Sachs bull case, if adoption accelerates

Each of those robots will carry sensors comparable to or exceeding the Unitree G1’s suite. They will map every space they enter. They will record every face they see. They will build 3D models of neighborhoods, store layouts, infrastructure, and crowd patterns. Individually, each robot is a privacy concern. Collectively, they are the most comprehensive public-space surveillance network ever built, assembled not by governments, but by commercial deployment of consumer and enterprise robots.

And here is the part that should concern national security professionals: those 3D maps have applications far beyond navigation. A centimeter-accurate LiDAR map of a building’s interior is useful for a robot trying to deliver a package. It is also useful for intelligence agencies, military planners, and anyone interested in the physical layout of sensitive infrastructure. When thousands of Chinese-manufactured robots build 3D maps of American, European, and Asian cities, the aggregate intelligence value is enormous, regardless of whether any individual robot was “designed for surveillance.”

Five things that need to happen

The gap between deployed technology and applicable law is not going to close on its own. Here is what functioning regulation would require.

1. Define data controller status for multi-party robot systems

Every jurisdiction that has a data protection law needs to clarify who is the data controller when a robot with manufacturer firmware, third-party AI software, and an individual operator collects personal data. The simplest approach: make the person who deploys the robot in public the primary data controller, with the manufacturer as a joint controller for any data collected or transmitted by firmware-level services. This creates clear accountability at both the operator and manufacturer level.

2. Create a robot operating permit for public spaces

No jurisdiction requires a permit to operate a humanoid robot with autonomous vehicle-grade sensors in a public space. This is indefensible. Most cities require permits for drones, street performers, and food carts. A mobile sensor platform with 3D mapping capability should require, at minimum, registration, proof of DPIA completion, and disclosure of what sensors are active and where data is stored.

3. Mandate sensor disclosure and minimize collection

People encountering a robot on the street should be able to determine what it is recording. This could be as simple as standardized indicator lights (similar to recording indicators on cameras) combined with a QR code linking to a machine-readable privacy notice. Sensors not required for the robot’s primary function should be required to be physically disabled, not just software-disabled, when operating in public spaces.
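No standard for such a notice exists yet. Purely as an illustration of what a machine-readable disclosure could look like, here is a hypothetical schema; every field name, value, and the operator itself are invented:

```python
# Hypothetical machine-readable sensor disclosure: the kind of document
# a QR code on the robot could resolve to. The schema is invented for
# illustration; no such standard currently exists.

import json

disclosure = {
    "operator": "Example Robotics Sp. z o.o.",  # hypothetical data controller
    "contact": "privacy@example.pl",
    "sensors": [
        {"type": "depth_camera", "count": 6, "recording": True},
        {"type": "lidar", "count": 1, "recording": True},
        {"type": "microphone", "count": 4, "recording": False},
    ],
    "data_storage": {"location": "EU", "retention_days": 30},
    "legal_basis": "legitimate_interest",  # GDPR Article 6(1)(f)
}

notice = json.dumps(disclosure, indent=2)
print(notice)
```

The point is not this particular schema but the principle: a passer-by's phone, or a regulator's audit tool, could parse the notice automatically instead of relying on a sign nobody reads.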

4. Address the firmware telemetry problem directly

When a manufacturer’s firmware transmits data from a robot without the owner’s knowledge or consent, and the owner cannot disable the transmission, the manufacturer is conducting unauthorized data processing. This should be explicitly prohibited under GDPR, CCPA, and equivalent frameworks. The Cyber Resilience Act moves in this direction but does not apply retroactively and does not specifically address persistent telemetry.

5. Treat robot sensor networks as critical infrastructure when they reach scale

When the number of robots mapping a city reaches a threshold where the aggregate data constitutes a comprehensive urban model, that data becomes a national security concern regardless of the purpose for which individual robots were deployed. Frameworks like the Committee on Foreign Investment in the United States (CFIUS) and the EU Foreign Direct Investment Screening Regulation should be extended to cover the accumulated sensor data of foreign-manufactured robot fleets.

Reasons for optimism

Existing laws (GDPR, AI Act, CRA) provide building blocks for robot regulation
Public awareness is growing thanks to high-profile incidents like the Sejm visit
The EU has institutional capacity to lead (EDPB, national DPAs, market surveillance)
Industry groups are beginning to self-regulate (IEEE, ISO robotics standards)
The technology is still early enough that norms can be established before mass deployment

Reasons for concern

No country has specific laws for humanoid robots in public spaces
GDPR enforcement for mobile robots is effectively zero
AI Act does not fully apply until 2027 and has a mobile robot blind spot
CRA has no retroactive provisions for robots already sold
Data controller liability for multi-party robot systems is completely undefined
The 20,000+ deployed robots are creating facts on the ground faster than law can follow
Aggregate 3D mapping by robot fleets is not addressed in any security framework

There is a deeper philosophical problem underneath the regulatory gap, and it is worth naming directly.

The entire framework of data protection law is built on the concept of consent. You agree to a privacy policy. You click “accept.” You walk past a sign that says “CCTV in operation.” Even when consent is imperfect or coerced, the legal fiction of informed agreement is what makes the system function.

Robots in public spaces break the consent model completely. You cannot consent to being filmed by a robot you did not know was there. You cannot consent to 3D LiDAR mapping of your face by a device you thought was a novelty. You cannot consent to telemetry transmission to a foreign country by firmware you have never seen. And even if the robot wore a sign saying “I am recording you with six cameras and a LiDAR,” the legal standard of informed consent requires that you have a genuine choice to refuse. On a public sidewalk, you do not. You cannot avoid the robot without leaving the public space entirely.

This is the same problem that killed Google Glass. People were uncomfortable being filmed without consent by someone wearing camera glasses. They called the wearers "glassholes," and the consumer product was withdrawn in 2015, dead from social rejection before regulation caught up. But humanoid robots are different. They are novel and entertaining. People approach them voluntarily. Children run up to them. The social dynamic that protected public spaces from Google Glass does not apply to a machine that looks charming rather than threatening.

Edward proved the point

The irony of Edward Warchocki’s story is that it proves, by demonstration, the exact regulatory gap that the Konfederacja MPs invited him to highlight. They wanted to show that Polish law is not keeping pace with robotics. They were right. But the proof was not the parliamentary speech. The proof was the three weeks before, when the robot walked through Polish cities collecting sensor data from thousands of people, and not a single institution noticed.

No data protection authority asked about the cameras. No city official asked about the LiDAR. No police officer asked about the data transmission. No regulator asked about the firmware. The robot was not invisible. It had 200 million views. It was the most visible robot in Poland. And still, not one person in any position of authority asked the basic question: what is this machine recording, and where does the data go?

What Edward demonstrated

- 3 weeks of public deployment before anyone in authority noticed
- 200 million+ video views: maximum possible visibility
- 0 regulatory interventions, despite GDPR, the AI Act, and the CRA

This is not a Polish failure. Poland’s data protection authority, UODO, is competent and active. The GDPR enforcement infrastructure exists. The problem is that the infrastructure was built for a world where data processors are companies with offices and compliance departments, not walking robots operated by entrepreneurs who describe their project as “a joke.”

And Poland is, by any measure, ahead of most countries. It has GDPR. It has an active DPA. It has politicians who at least want to discuss the issue. Most countries have less. The United States has no federal privacy law. Japan has no public-space robot provisions. China is building 20,000 robots a year with no civilian data protection framework that meaningfully constrains the state.

The window is closing

There is still time to build the right framework. The humanoid robot industry is in its infancy. The 20,000 robots deployed today will become 250,000 within a decade. The norms, standards, and laws established in the next two to three years will determine whether humanoid robots operate within a framework of accountability or whether we simply accept, as a fait accompli, that walking sensor platforms will map our cities, record our faces, and transmit our data without any legal constraint.

The technology exists today. It is walking down your street.

The law does not exist. And nobody seems to be in a hurry to write it.

Sources

  1. European Parliament - EU AI Act Full Text (2024) - accessed 2026-03-29
  2. EUR-Lex - General Data Protection Regulation (GDPR) - accessed 2026-03-29
  3. EUR-Lex - EU Cyber Resilience Act (2024) - accessed 2026-03-29
  4. Alias Robotics - The Cybersecurity of a Humanoid Robot (arXiv) - accessed 2026-03-29
  5. Alias Robotics - Cybersecurity AI: Humanoid Robots as Attack Vectors (arXiv) - accessed 2026-03-29
  6. MIIT - Humanoid Robot Innovation and Development Guidelines (2023) - accessed 2026-03-29
  7. Goldman Sachs - Rise of the Humanoids Report (2024) - accessed 2026-03-29
  8. European Data Protection Board - Guidelines on Video Surveillance under GDPR (3/2019) - accessed 2026-03-29
  9. UK Information Commissioner's Office - Guidance on the Use of Body Worn Video - accessed 2026-03-29
  10. Japan Robot Strategy - Vision, Strategy, Action Plan (2015) - accessed 2026-03-29
  11. ChinaLawTranslate - PRC National Intelligence Law (2017) - accessed 2026-03-29
  12. IEEE Spectrum - Security Flaw Turns Unitree Robots Into Botnets - accessed 2026-03-29
  13. Unitree Privacy Policy - accessed 2026-03-29
  14. House Select Committee on the CCP - Trojan Horse Tech: CCP Robots Inside the US - accessed 2026-03-29
  15. Counterpoint Research - Global Humanoid Robot Shipments 2025 - accessed 2026-03-29
