    Humanizing The Artificial Intelligence Foundation For Success

    Life sciences enterprises across the globe are turning to artificial intelligence (AI), recognizing it as one of the most prominent technological investments right now. AI has taken the world by storm, finding widespread adoption across life sciences: in pharmaceuticals, MedTech, animal health, clinical health, biotech, consumer health, and others. However, most companies in these segments are headed into that storm with their eyes closed and one hand tied behind their backs.

    AI is cross-functional. Understanding this fact is fundamental to AI’s success in any life sciences organization, but it is often the most overlooked. According to a 2018 MIT Sloan Management Review report [1], seven out of ten companies across all industries report minimal to no impact after adopting AI technology. But every company that has achieved significant results from AI exhibits distinct organizational values and behaviors. Building a successful strategy to leverage AI is not just a matter of implementing technological solutions but designing a strong “human” foundation. Companies, including those in life sciences, must focus on reforming their organizational culture to exploit the full potential of AI tools.

    ASSEMBLING THE DREAM TEAM

    If AI technology is the mighty Titanic carrying companies across the ocean of success, cultural debt [2] is the deadly iceberg waiting just out of sight. Cultural debt can accumulate over time if a company fails to address cultural issues. Current industry practices reveal a stark divide between the research and engineering aspects of development, which is highly harmful to an institution in the long run. Successful prototypes fail to work on the ground level, ingenious models become unnavigable jungles of entangled code, and data pipelines cannot sustain real-time results.

    Cultural debt refers to the negative consequences that arise when an organization neglects to prioritize or invest in its workplace culture.

    It is vital to foster a team culture that rewards planning, avoids needless code complexity, and embraces proactive inspection of features for stability, reproducibility, and accuracy. The best way to achieve this is with heterogeneous teams that combine strengths in research and engineering.

    A common error companies make when implementing this strategy must be noted: often, individuals have to report to both an engineering director and a research director. The result is a small subset of frustrated employees reporting to two different (and usually conflicting) branches without the authority to make necessary decisions at the required time. These employees often end up with “No Authority Gauntlet Syndrome” (NAGS) [3] and burnout. They may even bear the blame that belongs to cultural debt. The lesson is clear: heterogeneity only helps if it exists across the entire team.

    These cross-functional teams can exist at many levels across a life sciences organization, including technical teams that work with business and development teams. A specialized team may involve data scientists, project managers, product managers, and developers. Business teams can help ensure that AI expertise is leveraged to promote the company’s overarching mission and isn’t restricted to isolated cases. Moreover, developers can involve end-users in design decisions to encourage broader product adoption.

    CORE LEADERSHIP PRACTICES

    When implementing AI, life sciences organizations must redefine their overall strategy and business model. Business analytics must become part of the organizational culture, infused into all levels of management. Data-driven decision-making skills cannot be acquired simply by recruiting data scientists. Management should be able to leverage big data and continuously search for innovative ways to use analytical systems. Business requirements are dynamic; organizations need leaders who understand the nuanced applications of AI technology and its impact on an organization’s people. McKinsey & Company highlights eleven core practices [4] that can help leadership realize AI’s potential value at scale:

    1. The organization uses data (both internal and external) to support the goals of AI work effectively.
    2. The organization has access to internal and external talent with the appropriate skill sets to support AI work.
    3. Senior leaders demonstrate actual ownership of and commitment to AI initiatives.
    4. For business processes where AI has been adopted, it is integrated into day-to-day operations.
    5. The organization has a clear strategy for accessing and acquiring data that enables AI work.
    6. The organization runs effective continual processes for developing a portfolio of the most valuable AI opportunities.
    7. The organization has mapped where all potential AI opportunities lie (including the required level of investment, the difficulty of implementation, and the potential value at stake).
    8. Employees trust AI-generated insights.
    9. The organization has the proper technological infrastructure and architecture to support AI systems.
    10. All relevant data are accessible by AI systems across the organization.
    11. Frontline workers embed AI into formal decision-making and execution processes.

    EVOLVING EMPLOYEE ENGAGEMENT

    The adoption of AI doesn’t stop at the management level; employees are vital pieces on the chessboard. The growing integration of AI technology in workplaces has raised employee concerns about job security. Organizations can proactively address these concerns and foster a culture of trust and empowerment.

    Clear and transparent communication is crucial to allay employee fears. Organizations must comprehensively explain the purpose and benefits of implementing AI technology. While AI excels at automating routine tasks, human workers bring essential skills like emotional intelligence, creativity, and critical thinking. Life sciences organizations should underscore the value of these skills and create roles that leverage human strengths alongside AI technologies. Employees can better understand their evolving jobs by emphasizing AI’s role as a complement to human capabilities rather than a replacement.

    Investing in reskilling and upskilling programs [5], where employees can gain the skills necessary to adapt to changing job requirements, can help mitigate job worries. This enhances their employability within the organization and boosts their confidence in their ability to thrive alongside AI technologies. Organizations should also explore job redesign options to accommodate AI while ensuring continued employee engagement. This may involve task redistribution, creating hybrid roles that combine human and AI capabilities, or identifying new areas where employees can contribute their expertise.

    Involving employees in the decisions around AI implementation gives them a sense of ownership and control and shows them that their expertise and opinions are valued. Including them in the AI design and deployment process allows them to shape their roles and contribute to the organization’s success. This involvement should be reflected in feedback and support mechanisms. Managers should engage in open conversations, providing constructive feedback and guidance on how employees can adapt and thrive in an AI-enabled workplace. Establishing a supportive environment that recognizes individual progress helps build confidence and reduce insecurities.

    AUTOMATION DOES NOT EQUAL AUTONOMY

    Implementing AI in life sciences organizations offers immense potential for growth and efficiency. However, it also raises important ethical considerations. Because AI technology consistently outpaces regulation, everyone involved must take responsibility for its use.

    Ultimately, holding an algorithm responsible for any mishaps is impractical. Moreover, regulations like Europe’s General Data Protection Regulation (GDPR) [6] compel businesses, when requested, to explain their decision-making procedures, including those executed by automated systems. There are several ways organizations can mitigate the associated risks.

    Machine learning (ML) algorithms are complex, making it challenging to uncover their internal workings. This is why they are commonly referred to as “black boxes.” An extensive array of parameters can characterize deep learning algorithms, but merely listing these parameters does not suffice as a proper explanation. Life sciences organizations must invest in people capable of providing and communicating this information to relevant stakeholders. Businesses must ensure that accountability is assigned to individuals for every step of the AI lifecycle, encompassing data management, decision-making, and feedback loops. The human agent in this process guarantees that an algorithm’s decisions can be explained and safeguards against any potential bias in its operations.
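    As a minimal illustration (a hypothetical sketch, not a prescription for any particular toolchain), permutation importance is one widely used way to attribute a black-box model’s behavior to individual inputs: shuffle one feature at a time and measure how much the model’s error grows. The model, data, and error metric below are invented purely for the example.

```python
import random

def permutation_importance(model, rows, labels, n_features, error_fn, seed=0):
    """Estimate each feature's importance for a black-box `model`
    by shuffling that feature's column and measuring the error increase."""
    rng = random.Random(seed)
    baseline = error_fn([model(r) for r in rows], labels)
    importances = []
    for j in range(n_features):
        # Shuffle only column j, leaving the other features intact.
        shuffled_col = [r[j] for r in rows]
        rng.shuffle(shuffled_col)
        perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, shuffled_col)]
        importances.append(error_fn([model(r) for r in perturbed], labels) - baseline)
    return importances

# Toy "black box": its output depends only on feature 0, not feature 1.
model = lambda r: r[0] * 2.0
rows = [[float(i), float(i % 3)] for i in range(100)]
labels = [model(r) for r in rows]
mae = lambda preds, ys: sum(abs(p - y) for p, y in zip(preds, ys)) / len(ys)

imps = permutation_importance(model, rows, labels, n_features=2, error_fn=mae)
# Shuffling feature 0 hurts accuracy; shuffling feature 1 changes nothing.
```

    Even this crude attribution gives a human reviewer something concrete to communicate to stakeholders: which inputs the system actually relies on, and whether that matches domain expectations.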

    Self-learning systems cannot operate without supervision. Since AI algorithms can make thousands or even millions of decisions per minute, any inherent biases or issues are quickly amplified. Organizations must consistently oversee these systems throughout their entire lifecycle, from selecting data to generating outputs and taking actions. This ongoing monitoring ensures that each system functions according to its intended purpose.
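    To make the idea concrete, a monitoring loop can be as simple as comparing each live batch of inputs against the training baseline and raising an alert when a statistic drifts too far. The sketch below is a simplified, hypothetical check (production systems would use more robust statistical tests); it flags a shift in a feature’s mean.

```python
from statistics import mean, stdev

def drift_alert(baseline, live, threshold=3.0):
    """Flag when a live batch's mean drifts more than `threshold`
    standard errors away from the training baseline's mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    standard_error = sigma / (len(live) ** 0.5)
    z = abs(mean(live) - mu) / standard_error
    return z > threshold

# Toy feature: training data cycles through the values 0..9 (mean 4.5).
baseline = [float(x % 10) for x in range(1000)]
stable_batch = [float(x % 10) for x in range(100)]        # same distribution
shifted_batch = [float(x % 10) + 5.0 for x in range(100)]  # mean shifted by 5
```

    A stable batch passes silently, while the shifted batch trips the alert, prompting a human to investigate before the model’s decisions are amplified at scale.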

    CONCLUSION

    The influence of AI is only going to grow. By reforming their organizational culture and embracing AI as a transformative force, life sciences enterprises can realize their full potential and gain a competitive advantage in today’s fast-paced industrial landscape. Those who build a solid human foundation alongside AI technology will be better positioned to navigate the business storm and thrive in the future. Starting today, life sciences organizations must open their eyes, untie their hands, and embrace AI with a strategic and human-centered approach.

    REFERENCES

    1. Ransbotham S, Gerbert P, Reeves M, Kiron D, Spira M. Artificial intelligence in business gets real. MIT Sloan Management Review. September 2018. Accessed August 18, 2023. https://sloanreview.mit.edu/projects/artificial-intelligence-in-business-gets-real/
    2. Sculley D, Holt G, Golovin D, Davydov E, et al. Hidden technical debt in machine learning systems. Neural Information Processing Systems. 2015. Accessed August 18, 2023. https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf
    3. Kuhn C. Beware the no authority gauntlet in your organization. LinkedIn Pulse. June 12, 2018. Accessed August 18, 2023. https://www.linkedin.com/pulse/beware-authority-gauntlet-your-organization-cheri-dudek
    4. Chui M, Malhotra S. AI adoption advances, but foundational barriers remain. McKinsey & Company. November 13, 2018. Accessed August 18, 2023. http://www.mckinsey.com/featured-insights/artificial-intelligence/ai-adoption-advances-but-foundational-barriers-remain
    5. Fredriksen S, Skjærvik M. Understanding the role of organizational culture on artificial intelligence capabilities and organizational performance. University of Agder. 2021. Accessed August 18, 2023. https://uia.brage.unit.no/uia-xmlui/bitstream/handle/11250/2825428/Simen%20Fredrik%20Brunvand%20Fredriksen.pdf
    6. Sartor G. The impact of the general data protection regulation (GDPR) on artificial intelligence. European Parliament, Panel for the Future of Science and Technology; June 2020. Accessed August 18, 2023. http://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530(ANN1)_EN.pdf
