Securing Europe’s Digital Future.
Will Europe Lead or Fall Behind?
On Wednesday, 23 October, SME Connect organised a working breakfast titled “Securing Europe’s Digital Future. Will Europe Lead or Fall Behind?” in the European Parliament in Strasbourg.
The event was hosted by JÖRGEN WARBORN MEP, President of SME Europe of the EPP and Co-Chair of the SME Circle of the EPP Group. The panel featured distinguished experts: TILL KLEIN, Head of Trustworthy AI at appliedAI; ADAM BARTHA, Director at the European Policy Information Center – EPICENTER; NATASCHA GERLACH, Director of Privacy Policy at the Centre for Information Policy Leadership – CIPL; AYESHA BHATTI, Policy Analyst at the ITIF Center for Data Innovation; GIORGOS VERDI, Policy Fellow at the European Council on Foreign Relations – ECFR; and MARCO PANCINI, Head of EU Affairs at Meta. The discussion was moderated by DR. MICHAL BONI, Poland's first Minister of Administration and Digitisation (2011-2013), Member of the European Parliament (2014-2019), and SME Connect Special Advisor for Digitalisation & AI.
In his welcome address, DR. MICHAL BONI emphasized the significance of this period as the EU embarks on implementing the AI Act, acknowledging challenges at the national level while highlighting the opportunities presented by the incoming College of the European Commission and the newly elected Parliament. He stressed the importance of fostering competitiveness in Europe by comparing the drivers of innovation across regions, such as R&D investment, regulatory frameworks, and digital advancement, and addressed the need for a harmonized EU digital market, clear legal interpretations, and support for SMEs in AI development. He also pointed to challenges such as fragmented regulation and implementation difficulties, stressing the value of co-regulation and soft law to keep pace with technological change.
JÖRGEN WARBORN MEP addressed Europe’s struggle to stay competitive in digital innovation, noting signs of falling behind other regions in growth but recognizing an opportunity to improve. He stressed the importance of refining digital strategies across sectors, particularly by revisiting and improving the GDPR and rethinking the AI Act. While acknowledging the need to address AI’s risks, he argued that the current horizontal AI legislation may overemphasize risk, potentially limiting innovation, and suggested a shift towards legislation that encourages growth, innovation, and business opportunities.
GIORGOS VERDI outlined Europe’s widening innovation gap in digital technology, highlighting the EU’s growing dependence on non-EU countries for digital products, reduced global market share in ICT, and lack of tech giants compared to the U.S., where leading tech companies have twenty times the value of their European counterparts. He attributed this gap partly to regulatory burden, with nearly 95 regulations on digital products enacted, imposing complex requirements on SMEs, as well as to structural barriers, such as the lack of a unified digital single market, limited venture capital compared to the U.S., and restrictive bankruptcy laws that deter risk-taking. Additionally, he noted the EU’s waning influence in setting global technology standards, in contrast to more assertive technological policies in the U.S. and China. To bridge this gap, he proposed streamlining and harmonizing EU regulations across member states, eliminating duplicate laws, and establishing support structures like regulatory innovation offices and quality-controlled regulatory sandboxes to aid SMEs. He also called for workforce development through better migration policies, tech visas, and updates to the Blue Card scheme to address the projected shortage of ICT professionals. Lastly, he recommended a proactive digital foreign policy, with an ambassador dedicated to digital strategy, to reinforce the EU’s global role in tech standards and digital governance.
ADAM BARTHA highlighted Europe’s lag in digital competitiveness, attributing it partly to a sharp increase in regulatory burden since the Lisbon Treaty, which strains SMEs while larger corporations cope better. He criticized specific regulatory constraints, like those in the AI Act on computing capacity, for stifling innovation, and suggested a balanced approach focused on quality over quantity of regulation. Adam proposed creating a “28th regime,” allowing companies to incorporate at the EU level rather than navigating fragmented national systems, to simplify labor laws and licensing for SMEs. He emphasized incentives, including employee stock options, to make Europe more attractive for talent, and argued that a digitalized incorporation process, modeled on Estonia’s approach, could ease barriers for startups. He noted Europe’s comparative lag in private R&D investment due to structural factors like pension systems focused on redistribution rather than capitalization, recommending reforms to increase the venture capital available to startups. Regarding foreign direct investment (FDI), he warned that excessive screening may limit capital inflows and suggested easing restrictions for NATO allies to streamline investments, thus supporting funding without the bureaucratic overhead. While recognizing the EU’s 1% GDP allocation to state aid (half to green and tech sectors), he urged exploring private-sector-driven funding solutions, viewing these as essential for building a sustainable innovation ecosystem.
MARCO PANCINI highlighted Meta’s regulatory challenges in Europe, emphasizing the delays and fragmentation in the approval process for training AI models with public data from Meta’s platforms. Meta aims to develop open-source AI models like Llama that are not only powerful but also culturally and linguistically relevant to European users, embedding them across its services for broader accessibility. However, compliance with GDPR, especially regarding consent, creates hurdles, as Meta must navigate complex, lengthy approval processes with European data protection authorities, contrasting sharply with quicker responses from counterparts in the UK. For example, while the UK’s Information Commissioner’s Office (ICO) provided guidance within five weeks, the EU’s centralized “one-stop shop” mechanism led to months of review and hundreds of follow-up questions from the Irish Data Protection Commission, which oversees Meta’s EU operations. This prolonged timeline presents a major challenge given AI’s fast development cycles, which require frequent model retraining. Meta risks deploying models in Europe that are outdated by several cycles compared to models trained elsewhere, such as in the U.S., where three-month training cycles can be maintained. Without timely EU approval, European data may be excluded, potentially resulting in AI models that lack local language nuances and cultural contexts, affecting user experience and limiting the models’ relevance for European businesses. Marco also argued that the GDPR’s consent-focused approach, rooted in a “precautionary” framework, is problematic for AI. Unlike the direct correlations used in advertising, AI’s learning process is about identifying patterns in data, not directly replicating or outputting the same data. He suggested that viewing AI solely through a strict data-consent lens limits its potential and recommended a more nuanced regulatory approach that considers AI’s unique mechanisms.
Highlighting Meta’s recent partnership with EssilorLuxottica, a leading European eyewear company, Marco underscored Europe’s potential to lead in tech innovation. He expressed that partnerships like these, which aim to develop future-oriented technology platforms, underscore the potential for strong collaboration if the EU can create a regulatory environment that enables timely and effective compliance, supporting innovation without sacrificing data protection.
AYESHA BHATTI addressed Europe’s innovation lag, comparing it with the U.S., where a lighter regulatory approach contrasts starkly with the EU’s risk-focused digital and AI regulations. She argued that the EU’s “restrict-first, enable-second” model, exemplified by the AI Act, is already discouraging big tech from fully engaging in the European market, which in turn limits the smaller businesses dependent on these core technologies. She called for a shift in mindset within the EU, urging lawmakers to see technology as a way to uphold core European values like privacy and democracy, rather than as an inherent threat. Ayesha advocated for increased innovation support in legislation, urging the European Parliament to promote proactive policies that balance values and regulatory needs with technological potential. She recommended improvements in AI governance, including increased stakeholder engagement rooted in technical realities, and strengthening the pro-innovation aspects of the AI Act, like cross-border regulatory sandboxes, to facilitate a digital single market. Streamlining approval processes and bolstering market surveillance capacity were also highlighted as essential for managing AI’s complexities. Finally, she stressed the importance of open data initiatives and privacy-enhancing technologies, recommending a cooperative rather than isolationist stance on tech sovereignty. Collaborating globally, particularly with allies like the U.S. and Japan, would better support the spread of European values in the digital space.
NATASCHA GERLACH discussed challenges for AI development and innovation in Europe due to the GDPR’s stringent, fragmented implementation. She emphasized that the EU’s approach places SMEs at a disadvantage, as they struggle to navigate dense regulatory obligations and inconsistencies, particularly between the GDPR and the AI Act. Despite the GDPR’s intended role as a framework to facilitate data flows, its overly restrictive interpretation by some data protection authorities (DPAs) hinders innovation, disproportionately affecting smaller companies that lack resources. Natascha highlighted a key issue: while the GDPR aims to balance data protection with innovation, current DPA approaches often lack flexibility and provide minimal positive guidance, leaving companies uncertain about how to process data in compliance. She pointed to the French DPA’s efforts to modernize GDPR interpretation, in contrast with other authorities issuing restrictive interpretations that create barriers for companies using data-driven AI models. She argued that DPAs should recognize the economic and cultural value in allowing European data to support AI models, promoting an ecosystem that incorporates local languages and cultural nuances rather than relying solely on English-language data scraped from global sources. Looking ahead, Natascha discussed the European Data Protection Board’s upcoming opinion on AI data processing, expected in December 2024. Although such opinions are typically prepared without public input, this one could significantly impact AI by establishing guidelines on using personal data in AI development, including first-party data, which is critical for local adaptation of AI systems. She called for more proactive collaboration across regulatory bodies, citing the UK’s Digital Regulation Cooperation Forum as a model for bringing together DPAs, competition regulators, and AI authorities to address complex tech governance issues collectively.
Natascha argued that the GDPR, if applied with flexibility, could be a powerful tool for facilitating innovation rather than a barrier. She encouraged ongoing, in-depth, and technically informed dialogues among all stakeholders — regulators, industry experts, and international partners — to evolve GDPR and AI regulations in alignment with technological advancements, supporting EU competitiveness and data privacy goals.
TILL KLEIN outlined challenges in implementing the AI Act, emphasizing the need for sector-specific regulation, streamlined compliance in complex AI value chains, and enhanced organizational capacity. On sector-specific standards, he noted that the AI Act’s horizontal approach clashes with the sector-specific use of AI, as in medical devices and education; proactive, sector-focused standards involving public and private stakeholders are needed to ensure alignment and a cohesive EU market. On value chain compliance, he observed that AI systems now involve multiple providers, requiring standardized contracts to ensure compliance across the supply chain, especially for high-risk AI. On capacity building, he pointed out that many companies lack the structures for AI lifecycle management and need tools, skilled staff, and training to meet technical and compliance requirements effectively. Till advocated for early development of sectoral standards, improved compliance frameworks, and practical support for companies in navigating the Act’s requirements.
In his concluding remarks, JÖRGEN WARBORN MEP called for a long-term shift in Europe’s mindset to better support innovation, suggesting policies that reward risk, revise insolvency laws, and build an EU-wide capital market. He criticized the “Brussels Effect,” where the EU’s regulatory ambitions, like the GDPR, fail to set global standards effectively, proposing a “wait-and-see” approach for new technologies like AI to allow innovation while regulating specific risks. He emphasized the need for regulatory harmonization across the EU to fully leverage the single market, expressing hope that recent reports indicate progress toward this mindset shift.