Adoption of AI - Laws, trends and what's next
AI is transforming the world, but with great power comes great responsibility. In episode 2 of SpeakUp Talks, Robert Greco (Legal Director) and Arun Yadava (CTO & Tech Expert) break down the upcoming EU AI Act, the ethical responsibilities it raises, and what it means for companies, compliance, and the future of AI.
Episode summary
In this episode, Legal Director Robert Greco sits down with CTO Arun Yadava to chart a pragmatic path through AI's rapid evolution and the EU AI Act's emerging guardrails. The conversation starts with the obvious: models have leapt from single-purpose tools to versatile systems that can summarize, structure information, and translate even niche dialects, capabilities that meaningfully reduce repetitive work for compliance teams. Yet greater power raises the stakes, and both hosts emphasize that adoption must be paired with clarity about purpose, impact on people, and oversight.
They frame the law as a risk-tiered blueprint rather than a brake. High-risk and prohibited uses signal clear “do/don’t” lines; most corporate use cases will sit in the limited- or minimal-risk buckets, provided teams design for privacy, fairness, and transparency. Arun notes that guardrails are visible in real products: output limitations and copyright filters have tightened over time, a tradeoff that sometimes reduces raw capability but meaningfully improves safety and compliance readiness. The takeaway: constraints are features when they build trust.
Bias and opacity get special attention. General-purpose models can be opaque, and gaps in their training data can amplify unfairness. Rather than chasing full automation, SpeakUp's product philosophy is human-in-the-loop: use AI to make strong, well-sourced suggestions and show the quotes or references that informed them, so case handlers can decide. Automated actions based on AI outputs, especially where people are affected, remain off-limits; humans make the calls, and the system keeps the reasoning traceable.
The team also compares regional postures. The EU may feel restrictive compared with the U.S. or China, but being first to codify "good use" helps vendors converge on safer defaults, which is useful for buyers who must answer legal, ethical, and reputational questions. Practically, that means choosing model providers with credible assurances, understanding where data flows, and validating that safety updates aren't just promised but are visible in behavior.
Finally, they peek ahead: real-time, multi-modal models (including audio-to-audio) are arriving, and the team is prototyping where they truly help—faster intake, better multilingual experiences, and less drudgery in investigation workflows—without crossing the line into automated decisions. The result is a balanced adoption playbook: let AI handle repetitive tasks and language work, keep humans responsible for judgments, and design transparency into every step so employees, investigators, and regulators can trust the outcome.