IT Governance & Leadership
Responsible AI at Risk: Understanding and Overcoming the Risks of Third-Party AI
Do responsible AI programs address the risks of third-party artificial intelligence tools? A panel of experts weighs in.
Hackers can use generative AI to plan more effective attacks, but companies can use it to strengthen their cyberdefenses.
Inflation and supply chain disruption are exposing the risks of relying on a subscription model in some markets.
Organizations can apply biology-inspired adaptive design principles to become more resilient against cyberattacks.
Wayfair’s CTO describes how the e-commerce retailer uses AI and machine learning to support customers and manage risk.
An empowered strategic integrity function is key to developing a more proactive and systemic approach to governance.
Code reuse is common, but leaders must be aware of potential vulnerabilities to mitigate risk exposure.
Lessons and insights from past cyberattacks can help companies prepare for and respond more effectively to future threats.
Stanley Black & Decker’s CTO discusses responsible and sustainable AI and how the company uses AI to innovate.
To reduce ethical lapses, organizations need systems for anticipation and systems for resilience.
The data science management process, job moves for pay equity, and political concerns in M&As.
When managing a merger, pay attention to political disparities.
JoAnn Stonier, chief data officer at Mastercard, discusses how design thinking enables better AI implementation.
Anticipating and withstanding cyberattacks — cyber resilience — must become a companywide concern.
In our new spring issue: platform-based ecosystems, blockchain, data failures, and misbehaving leaders.
Understanding the subconscious drivers of strategy, responding to regulatory risks, and making sense of conflicting advice.
Platform companies should act quickly to temper regulation that erodes network effects.
Bridge-building for business and data teams, responsible AI practices, and smart time management.
Organizations need to develop more-robust processes to ensure responsible use of AI.
Boards will need increased technology fluency to provide adequate oversight of AI risk management.