AI & Automation · November 2, 2025 · 6 min read

New EU AI Rules: Why Digital-Service Firms Need to Pivot Fast


Oscar Arson

CTO & Co-Founder

The AI landscape is shifting fast. It’s no longer just about building models and deploying chatbots. The regulatory environment is evolving in parallel — and for a digital-services firm like Authect, that matters deeply. Recently, the European Data Protection Supervisor (EDPS) published revised guidance on generative AI, and the EU AI Act’s incident-reporting mechanisms are being shaped into real-world obligations.
If you offer AI integration, cybersecurity, or website/app development, you need to pay attention now, because the infrastructure, models and services you deliver may soon carry liability, compliance and risk-management burdens that were previously optional.

What’s Changed: Key Regulatory Updates

1. Revised EDPS Guidance on Generative AI
On 28 October 2025, the EDPS issued updated guidelines for EU institutions, bodies and agencies on their use of generative AI. The new version:

  • Provides a refined definition of generative AI for clarity.

  • Offers a practical compliance checklist for assessing lawfulness of processing.

  • Clarifies roles: who is the controller, processor or joint controller in a generative AI workflow.

In short: the regulator is moving from “framework thinking” to “here is what you must do” for generative AI.

2. Draft Guidance on Reporting Serious AI Incidents
The European Commission published draft guidance (late October 2025) on how to report “serious incidents” under Article 73 of the EU AI Act.
Key points:

  • A “serious incident” can include an indirect causal link between an AI system and harm (e.g., a flawed model leading to a wrongful loan denial).

  • Providers of “high-risk” AI systems must promptly notify national market-surveillance authorities of serious incidents.

  • The draft sets out reporting templates and clarifies how this obligation interacts with existing ones (e.g., critical infrastructure under NIS-2).

  • Non-compliance risk: fines up to €15 million or 3% of global turnover in some cases.

  • The deadline for comments is 7 November 2025.

3. Broader Regulatory Landscape
Further context:

  • The EU is rolling out its digital sovereignty push: more AI testing facilities, “AI factories” and regulatory sandboxes.

  • For companies deploying AI, especially “foundation” or “general-purpose” models, the rules are getting concrete.

Why This Matters to Authect (and Our Clients)

As a digital-services firm delivering website design, cybersecurity, AI integration and app development — especially targeting Dubai and the MENA region — here’s how these regulatory shifts impact us:

  • Infrastructure & Model Selection Becomes a Compliance Decision
    When you embed an AI model into a client’s system, you are no longer worrying only about performance or UX. You must ask: is this considered “high-risk”? Am I the provider or the deployer under the regulation? What controls must be in place? The draft guidance spells this out.

  • Liability & Risk Management are Front and Centre
    If an AI system you deploy causes serious harm (even indirectly), you may need to report it. Failing to do so can mean heavy fines, remediation costs and reputational damage. For clients in regulated industries (finance, healthcare, critical infrastructure) this will be a board-level concern.

  • Cybersecurity + Governance Gains Premium Importance
    With increased regulatory focus, you can position Authect as not just a developer but a full-stack advisor: integrating AI, securing it, governing it. We already offer cybersecurity services; we should upsell “AI governance readiness” as a package.

  • Middle-East / Dubai Opportunity
    Many businesses in the region look to Europe (and the global market) as benchmarks. As companies increasingly face regulation, they will seek external vendors who understand compliance, not just coding. Authect can claim that knowledge.
    Also: if European standards become a de facto global standard (as often happens), being ahead of the curve is a differentiator.

  • Marketing / Thought-Leadership Angle
    There is a great content opportunity here. Blogs, webinars or LinkedIn posts addressing “How the EU AI Act affects you in the GCC”, “What Saudi/UAE firms must know about high-risk AI systems”, or “Integrating generative AI safely in your stack”. This reinforces your brand as a forward-thinking, compliance-aware partner.

Action Plan for Authect

Here are concrete steps we should take in the next 30-90 days:

  1. Audit our service catalogue:

    • Identify which of our services intersect with “high-risk AI” (e.g., model deployment, critical-infrastructure clients, decision-support systems).

    • Ensure our contracts and proposals include language about regulatory compliance, incident-reporting obligations, roles (provider vs deployer).

    • Create an internal checklist based on the EDPS guidance for generative AI.

  2. Develop “AI Governance Readiness” offering:

    • Build a template (or review service) that examines model choice, data sourcing/training, roles and responsibilities, documentation, monitoring, and the incident-response plan (a minimal sketch of such a template follows this action plan).

    • Position this as an upsell for clients building AI-enabled services.

    • Link it with our cybersecurity offering (penetration testing of AI pipelines, ethical/harm assessment).

  3. Content & Outreach Calendar:

    • Publish blog posts and LinkedIn posts around:

      • “What the new EDPS guidance means for SMEs”.

      • “High-risk AI – what Dubai firms must know”.

      • “Are you the ‘provider’ or ‘deployer’? Why that matters under the EU AI Act”.

    • Use Authect’s colour (#1E5AFF) and font (Inter), and include the footer “Powered and Built by Authect”.

    • Share via LinkedIn CEO persona and Authect company page.

  4. Training & Capability Building:

    • Ensure Pranas and Oscar (and any partners) refresh their understanding of the EU AI Act & the draft guidance.

    • Consider a one-hour “lunch & learn” for our team on incident-reporting obligations under Article 73.

    • Map out how our stack (React, TS, Tailwind, self-hosting) intersects with the AI system lifecycle (data ingestion → model → UI → deployment → monitoring); a minimal mapping sketch follows this action plan.

  5. Client Engagement:

    • In current proposals, add a section “Regulatory readiness” for AI-enabled projects.

    • For clients already using/considering AI, proactively suggest a “compliance check”.

    • Use the regulatory shift as a sales window: “Many clients will soon ask ‘is this AI system compliant?’ — let’s help you answer that now.”
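
To make item 2 of the action plan concrete, here is a minimal sketch of what such a readiness checklist could look like in our own stack (TypeScript). The type and field names below are illustrative assumptions drawn from the bullets above, not terms prescribed by the EDPS guidance or the EU AI Act.

```typescript
// Illustrative sketch only: the type and field names are our own working
// vocabulary, not terminology mandated by the EDPS guidance or the EU AI Act.

type AiActRole = "provider" | "deployer" | "unclear";
type RiskClass = "high-risk" | "limited-risk" | "minimal-risk" | "unassessed";

interface GovernanceReadinessChecklist {
  client: string;
  system: string;                   // e.g. "loan pre-screening assistant"
  role: AiActRole;                  // provider vs deployer drives the obligations
  riskClass: RiskClass;             // high-risk triggers Article 73 reporting duties
  modelChoice: {
    name: string;                   // which model / vendor is embedded
    generative: boolean;            // pulls in the EDPS generative-AI guidance
  };
  dataSourcing: {
    trainingDataDocumented: boolean;
    personalDataInvolved: boolean;  // lawfulness-of-processing check
  };
  documentationUpToDate: boolean;   // technical documentation maintained
  monitoringInPlace: boolean;       // post-deployment monitoring
  incidentResponse: {
    planExists: boolean;
    authorityContactKnown: boolean; // national market-surveillance authority
  };
}

// Hypothetical entry we could attach to an internal audit or a proposal:
const exampleEntry: GovernanceReadinessChecklist = {
  client: "Example FinTech (hypothetical)",
  system: "Decision-support scoring model",
  role: "deployer",
  riskClass: "unassessed",
  modelChoice: { name: "third-party LLM", generative: true },
  dataSourcing: { trainingDataDocumented: false, personalDataInvolved: true },
  documentationUpToDate: false,
  monitoringInPlace: false,
  incidentResponse: { planExists: false, authorityContactKnown: false },
};
```

Anything left false or “unassessed” becomes a line item in the readiness report we hand back to the client.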
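
For item 4, here is an equally minimal sketch (again TypeScript, with illustrative names and assumed stack assignments) of how the lifecycle stages named above could map onto our stack, so that every stage has an owner and a compliance note.

```typescript
// Illustrative mapping only: the stage names come from the bullet above;
// the stack assignments are assumptions about how we typically build.

type LifecycleStage = "data-ingestion" | "model" | "ui" | "deployment" | "monitoring";

interface StageMapping {
  stage: LifecycleStage;
  ownedBy: string;        // which part of our stack or team touches this stage
  complianceNote: string; // what the new rules make us check at this stage
}

const stackLifecycleMap: StageMapping[] = [
  { stage: "data-ingestion", ownedBy: "TypeScript backend / client APIs", complianceNote: "data sourcing, lawfulness of processing" },
  { stage: "model",          ownedBy: "third-party or self-hosted model", complianceNote: "risk classification, provider vs deployer role" },
  { stage: "ui",             ownedBy: "React + Tailwind front end",       complianceNote: "transparency to end users" },
  { stage: "deployment",     ownedBy: "self-hosted infrastructure",       complianceNote: "documentation, security hardening" },
  { stage: "monitoring",     ownedBy: "logging and alerting",             complianceNote: "serious-incident detection, Article 73 reporting" },
];
```

Keeping this map next to the checklist makes it easy to show a client, stage by stage, where a compliance question actually lives in the build.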

Key Messages for Clients & Prospects

  • “AI is no longer just about innovation — it’s about accountability.”

  • “The baseline compute & models might be commoditised; your differentiator will be governance, security, localisation.”

  • “If you build an AI system and it causes harm, even indirectly, you may need to report the incident — and that means your vendor matters.”

  • “In the Middle East, with clients looking globally, being aligned with EU-grade standards can be a trust signal.”

  • “Authect isn’t just building websites or apps; we’re delivering AI-enabled services that meet modern regulatory expectations.”

Conclusion

The regulatory tide is turning. For digital-service firms that treat AI as just another module, this shift could mean surprise liability, misalignment, and reputational risk. For firms like Authect — who already emphasise modern tech stacks, security and global readiness — this is an opportunity to elevate our positioning: we don’t just build. We integrate. We secure. We govern. We future-proof.

As we expand into Dubai, the Middle East, and serve clients across sectors, let’s build our narrative now: “AI-integration done right, with governance, transparency and regional execution.” Because the rules aren’t coming — they’re here. And being ahead of them will matter.
