

Cyber laws are rules pertaining to legal informatics that govern software, e-commerce, digital information distribution, and information security. They typically address a wide range of related topics, including privacy, freedom of speech, and Internet usage and access.
Cyber laws shield users from becoming victims of online fraud. AI systems are categorized by risk level under the European Union’s AI Act, whose provisions began taking effect in phases in 2025. Strict transparency and auditing requirements apply to high-risk technologies such as facial recognition in security.
The EU’s AI Act seeks to advance innovation and position Europe as a leader in the area while safeguarding fundamental rights, democracy, the rule of law, and environmental sustainability against high-risk AI.
Certain AI applications that threaten citizens’ rights are banned by the new law, such as biometric classification systems based on sensitive traits and the indiscriminate scraping of facial photos from the internet or CCTV footage to build facial recognition databases.
The 2025 revisions to China’s AI laws place a strong emphasis on state control and mandate security assessments for generative AI models.
India’s new AI governance regulations offer improved data privacy for regular users, more robust deepfake protections, and more transparent disclosures. The framework places a strong emphasis on accountability, safety, and transparency for all AI-powered services and apps. This represents a change toward safer, more reliable digital experiences for users of smartphones and the internet.
The Digital Operational Resilience Act (DORA), a regulation introduced by the European Union, strengthens the digital resilience of financial entities by ensuring that banks, insurance companies, investment firms, and other financial entities can withstand, respond to, and recover from ICT (Information and Communication Technology) disruptions, such as cyber attacks or system failures.
The proposed Artificial Intelligence (Regulation) Private Members’ Bill (AI Bill) in the UK calls for the creation of an “AI Authority” to supervise the UK’s regulatory approach to AI in accordance with certain regulatory principles, such as safety, fairness, and accountability. It would also require the AI Authority to establish a program for engaging the public in meaningful, long-term discussion of the opportunities and risks of AI, and require companies developing, deploying, or using AI to designate an AI officer to oversee its safe use.
In an effort to prevent states from enacting their own AI laws, US President Donald Trump will sign an executive order this week creating a federal framework for the technology.
According to the National Conference of State Legislatures, US states have introduced a growing number of laws pertaining to artificial intelligence over the past few years. All 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. proposed legislation on the subject in the 2025 legislative session, and 38 states adopted or enacted approximately 100 laws this year.
Middle East countries that make significant investments in AI research, development, and application include Saudi Arabia, the United Arab Emirates, Qatar, Bahrain, Oman, and Egypt.



