I've just submitted the first 5,000 words toward my Professional Doctorate at Anglia Ruskin University, Cambridge, a major milestone on this academic journey.
The focus of this reflective report was to critically assess my professional practice in the context of my research into AI and Cybersecurity Governance, Risk, and Compliance (GRC), and more importantly, to establish my position as a Pracademic.
Why pracademic? Because I live at the intersection of industry and academia:
• As a Cyber and AI GRC professional, I see firsthand the real-world complexity organisations face, especially those with an aggressive appetite for AI adoption.
• As a guest lecturer and advisory board member for the Computer Science departments of two UK universities, I witness the urgency for curricula and research to stay grounded in practical relevance.
• As a regular speaker and participant at well-known and insightful technology, cybersecurity, and AI conferences, I engage with industry leaders, share insights on emerging trends, and promote cybersecurity best practices.
The synergy between industry and academia isn't optional; it's essential. It's how we build trustworthy, responsible, and secure AI systems.
And we're seeing exciting moves in this space:
♻️ Oxford University will lead AI security research through the new UK National Laboratory for AI Safety, a critical step in proactive threat mitigation.
Read more – [https://lnkd.in/ei652tFe]
♻️ UKRI's £54m investment into secure AI will help address major national and global challenges.
Read more – [https://lnkd.in/eQTtR_59]
♻️ The Accenture–The Alan Turing Institute partnership is setting a new standard for integrating responsible innovation into AI strategy.
Read more – [https://lnkd.in/ePBHucq9]
These collaborations are reminders that the best solutions emerge when academic rigour meets real-world urgency.
To shape the future of AI responsibly and securely, we must stop thinking in silos and start building in synergy.