The sustainability of early AI adoption

June 17, 2024

Artificial Intelligence (AI) is beginning to transform businesses – creating new opportunities and driving significant advancements in various sectors.

However, as businesses start to adopt AI, challenges related to trust and transparency have emerged.

Not only are these issues functionally problematic, but they are also deeply intertwined with the ethical, social and environmental aspects of AI use.

By exploring the dual-use dilemma of early AI adoption, businesses can learn to leverage AI responsibly using existing governance frameworks.

Current and future challenges of AI adoption

One of the primary challenges businesses encounter in the early stages of AI adoption is the lack of expertise in integrating AI into their operations.

Many organizations struggle to effectively implement AI, manage data security and align AI systems with their strategic goals.

This knowledge gap can lead to inaction and missed opportunities, creating competitive gaps within industries.

However, though challenging, these early adoption issues can be overcome with time and effort.

Beyond initial adoption challenges, there exists a fundamentally harder issue around the transparency and trust of AI systems and their generated responses.

AI models, particularly those based on deep learning, often operate as “black boxes” – making it difficult to understand how they arrive at specific decisions.

This lack of transparency can erode trust among stakeholders, including employees and customers.

Viewing transparency and trust as sustainability issues can help address these challenges. Extending an ESG (Environmental, Social and Governance) framework to AI provides a structure for governance around measuring, testing and managing AI operations.

This can build trust in business practices and reassure consumers of the company’s responsibility to its communities.

An exercise in ESG implications for AI

Though AI offers numerous benefits, it also comes with substantial environmental costs.

The energy consumption required to train and run large-scale AI models contributes significantly to carbon emissions.

Investment in data center technology for AI workloads is growing, making sustainable practices increasingly critical.
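To make the scale of these energy costs concrete, here is a minimal back-of-envelope sketch of how a training run's carbon footprint might be estimated. All inputs (GPU count, power draw, data-center overhead, grid carbon intensity) are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope estimate of the carbon footprint of an AI training run.
# Every input below is a hypothetical, illustrative value.

def training_emissions_kg_co2(gpu_count: int,
                              gpu_power_kw: float,
                              hours: float,
                              pue: float = 1.2,
                              grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Estimate CO2 emissions (kg) for a training run.

    pue: Power Usage Effectiveness, the data center's overhead multiplier
         (cooling, networking) on top of raw compute power.
    grid_kg_co2_per_kwh: assumed carbon intensity of the local grid.
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 1,000 GPUs drawing 0.7 kW each for 30 days.
emissions_kg = training_emissions_kg_co2(1000, 0.7, 30 * 24)
print(f"{emissions_kg / 1000:.0f} tonnes CO2")  # → 242 tonnes CO2
```

Even this rough sketch shows why grid carbon intensity and data-center efficiency dominate the environmental accounting: the same run on a low-carbon grid would cut the estimate several-fold.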

The difficulty of discerning truth from fakes at scale has profound social implications. AI-generated content – including deep fakes of images, audio and video – can spread misinformation, eroding trust in public figures and institutions.

Tools like elevenlabs.io (text to audio) and HeyGen (audio+image to video) are powerful but have the potential for misuse.

Another socioeconomic issue is that AI energy consumption is concentrated in higher-GDP countries, creating accessibility and equity problems.

The cost of AI will eventually be passed on to users, creating further inequalities.

AI’s training data and session interactions can introduce biases, leading to socially concerning outputs.

For example, AI may correlate urban geographic demographic data with academic performance, influencing job recommendations unfairly.

This highlights the need for transparency in AI’s data “supply chain” and auditability for governance.

Regulating and managing AI to ensure sustainability and ethical use is complex.

The rapid pace of AI development outstrips regulatory frameworks, creating gaps in accountability and transparency.

Effective governance requires robust policies to prevent misuse and ensure ethical standards.

A pragmatic approach

Cost accounting for early AI adoption is difficult – especially when AI use is exploratory and benefits are intangible.

Businesses need enhanced methods to capture AI’s full impact – including direct, indirect and hidden costs.

Qualitative and quantitative methods should measure intangible benefits like employee and customer satisfaction.

AI excels at processing large-scale data and identifying complex patterns, making it suitable for solving significant, scalable problems with direct societal or economic impacts.

By prioritizing high-impact AI projects, businesses can help ensure meaningful and sustainable benefits.

Pilot programs can test AI solutions before full-scale deployment.

Using AI for general productivity presents a tradeoff between improving employee well-being and capturing productivity gains.

This complicates discussions on remote work and performance management but is necessary to ensure mutual benefits for employees and businesses.

Businesses should use broad metrics to measure AI’s impact – including energy consumption, productivity gains and employee feedback.

As AI adoption matures, more refined metrics can capture nuanced impacts within the organization.
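A minimal sketch of what tracking such broad metrics might look like in practice. The metric names (energy per workload, hours saved, feedback scores) are illustrative assumptions chosen to match the categories mentioned above, not a prescribed schema:

```python
# Illustrative sketch: aggregating broad AI-adoption metrics
# (energy use, productivity gains, employee feedback) for reporting.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AIAdoptionMetrics:
    energy_kwh: list = field(default_factory=list)       # per-workload energy use
    hours_saved: list = field(default_factory=list)      # estimated productivity gains
    feedback_scores: list = field(default_factory=list)  # 1-5 employee ratings

    def record(self, energy_kwh: float, hours_saved: float, feedback: int) -> None:
        """Log one AI workload or pilot outcome."""
        self.energy_kwh.append(energy_kwh)
        self.hours_saved.append(hours_saved)
        self.feedback_scores.append(feedback)

    def summary(self) -> dict:
        """Roll up broad totals suitable for an early-stage ESG report."""
        return {
            "total_energy_kwh": sum(self.energy_kwh),
            "total_hours_saved": sum(self.hours_saved),
            "avg_feedback": mean(self.feedback_scores) if self.feedback_scores else None,
        }

metrics = AIAdoptionMetrics()
metrics.record(energy_kwh=12.5, hours_saved=3.0, feedback=4)
metrics.record(energy_kwh=8.0, hours_saved=1.5, feedback=5)
print(metrics.summary())
```

As adoption matures, the same structure can be extended with finer-grained fields (per-team breakdowns, indirect costs) without changing the reporting interface.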

Third-party ESG services can provide visibility into how your practices and sustainability claims align with policy changes and case law, helping businesses stay compliant and monitor changes over time.

ESG governance, though still relatively young and ever-changing, offers a head start on many of the same challenges of sustainable AI use.

For instance, gener8tor and U.S. Ventures collaborated to provide a sustainability accelerator program this spring for several up-and-coming startups.

One of those startups, Softly Solutions, founded by Mollie Hughes, is addressing the risk of surveillance and market intelligence for CPG businesses.

Softly Solutions helps businesses navigate green marketing regulations and ensure compliance.

A pragmatic approach may involve limiting AI to high-impact solutions where the benefits outweigh the costs.

This allows businesses to engage with AI adoption cautiously, refining their approach over time.

Strategic recommendations for AI sustainability

  • Adopt a phased approach: Start with broad metrics and develop more nuanced measures as AI adoption progresses. This allows organizations to learn and adapt strategies based on initial insights.
  • Focus on high-impact areas: Direct AI efforts toward solving critical problems that offer clear, scalable benefits. This maximizes the positive effects of AI adoption.
  • Assess with a systems approach: Evaluate AI’s impact within the broader industry and community context, considering interdependencies between business units and functions.
  • Engage stakeholders: Involve employees, customers and other stakeholders in shaping AI policies. This inclusive approach ensures AI practices are fair, transparent and aligned with broader organizational goals.
  • Integrate into existing ESG frameworks: This ensures AI considerations are included in broader sustainability efforts and reporting.
  • Ensure ethical and sustainable practices: Implement governance frameworks that promote the ethical use of AI and sustainable practices. These should prevent misuse and maintain ethical standards.
  • Report transparently: Regularly publish corporate sustainability reports (CSR) that include detailed information on AI’s impact. Engage third-party services to manage policy and case law changes, ensuring compliance with regulations and risk management.

To address trust and transparency issues surrounding early AI adoption, businesses are strongly encouraged to take strategic steps toward ethical and sustainable AI use.

By adopting a systems approach, leveraging existing ESG frameworks and engaging stakeholders, organizations can develop consistent and open measurements of their progress.

These efforts will foster a trustworthy and transparent AI ecosystem that benefits both businesses and society.
