March 5, 2024
The dual-use dilemma of Artificial Intelligence (AI) describes the technology’s capacity to serve both beneficial and potentially harmful purposes, with consequences for society and industry alike.
At its core, it refers to technology’s ability to offer groundbreaking solutions and efficiencies on one hand, while posing significant ethical, privacy and security challenges on the other.
This balancing act is particularly evident in the realm of prompt engineering, a nuanced area of AI that shapes both the quality of a model’s answers and the degree of trust a requester can place in them.
A double-edged sword
Prompt engineering is the art and science of crafting inputs (prompts) to elicit desired outputs from AI models — particularly large language models (LLMs) like ChatGPT.
This technique can narrow the context and improve the quality of AI results, empowering users, boosting productivity and fostering innovation.
However, it also opens the door to prompt interception or manipulation, where the intended meaning or request can be subtly or materially altered without the user’s knowledge or consent.
The power of prompt priming and personas
One form of prompt engineering is prompt priming and the creation of prompt personas.
According to a Medium piece titled “What is Priming the Prompt?”, understanding and leveraging prompt priming is crucial for users.
This involves setting up “profiles” or contexts within AI tools, like ChatGPT, to tailor responses to specific needs or preferences, enhancing the utility and personalization of interactions.
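As a rough sketch of how priming works in practice, the snippet below prepends a persona profile as a system message using the OpenAI Python client. The persona text, model name and helper function are my own illustrative assumptions, not a prescribed setup.

```python
# A minimal sketch of prompt priming: a persona "profile" is sent as a
# system message ahead of every user prompt, so the model tailors its
# responses to that context. Persona text and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The priming "profile": role, audience and response preferences.
PERSONA = (
    "You are a compliance analyst at a regional healthcare provider. "
    "Answer in plain language, flag any regulatory implications, and "
    "keep responses under 200 words."
)

def primed_chat(user_prompt: str) -> str:
    """Send a user prompt with the persona prepended as context."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONA},  # the priming context
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(primed_chat("Summarize our obligations when emailing patients."))
```

Because the persona rides along with every request, the same question yields answers tuned to that profile without the user retyping the context each time.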
Safeguarding privacy with prompt interception
Another form of prompt engineering is the manipulation of a prompt after the user submits it, but before it reaches the AI model.
On the security front, prompt interception and manipulation can be a force for good — particularly in protecting personally identifiable information (PII) or adhering to Health Insurance Portability and Accountability Act (HIPAA) regulations.
By filtering out sensitive information before it’s processed, organizations can harness AI’s power, while safeguarding privacy.
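To make that filtering step concrete, here is a minimal sketch that redacts a few common PII patterns before a prompt leaves the organization. The regexes cover only illustrative cases; a production HIPAA filter would need far broader coverage.

```python
import re

# Illustrative patterns only; a real PII/HIPAA filter needs far broader
# coverage (names, addresses, medical record numbers, dates, etc.).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive values with typed placeholders before the
    prompt is processed by any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Patient John, SSN 123-45-6789, can be reached at jd@example.com."
print(redact(raw))
# -> Patient John, SSN [SSN REDACTED], can be reached at [EMAIL REDACTED].
```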
Innovations in AI safety
A positive example of this is Plurilock AI PromptGuard, a chat wrapper that allows users to select the LLM and apply specific prompt interception/manipulation rules, ensuring a safer and more controlled AI interaction.
Users have visibility into and control over the rules employed, and can audit how their requests were changed throughout a session. For example, personal information can be masked while the AI processes a request and then restored in the final output.
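The mask-and-restore pattern can be sketched in a few lines. To be clear, this is a generic illustration of the technique, not PromptGuard’s actual implementation; the class, token format and audit record are assumptions for the example.

```python
import re

# Reversible masking with an audit trail: sensitive values are swapped
# for opaque tokens before the prompt goes to the model, then swapped
# back in the final output. The token vault never leaves the organization.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

class MaskingSession:
    def __init__(self):
        self.vault = {}   # token -> original value (kept local)
        self.audit = []   # record of every substitution for user review

    def mask(self, prompt: str) -> str:
        """Swap each email for an opaque token before sending to the AI."""
        def _swap(match):
            token = f"<PII_{len(self.vault)}>"
            self.vault[token] = match.group(0)
            self.audit.append(f"masked {match.group(0)!r} as {token}")
            return token
        return EMAIL.sub(_swap, prompt)

    def restore(self, ai_output: str) -> str:
        """Put the original values back into the model's final answer."""
        for token, original in self.vault.items():
            ai_output = ai_output.replace(token, original)
        return ai_output

session = MaskingSession()
masked = session.mask("Draft a reply to jane.doe@example.com about her bill.")
# ... the masked prompt goes to the LLM; suppose it echoes the token back ...
answer = "Dear <PII_0>, regarding your bill..."  # stand-in for model output
print(session.restore(answer))  # original email restored in the final output
print(session.audit)            # user can review exactly what was changed
```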
Unintended consequences
Conversely, the technology’s darker side emerges when chat engines rewrite the meaning of prompts without transparency, which can lead to misinterpretations or biased responses.
The recent controversy surrounding Google’s Gemini project, which involved injecting diversity instructions into user requests without clear disclosure, underscores the need for vigilance and transparency.
The unintended consequence was that requests for historical images produced pictures in which the historical context was replaced by artificially diverse depictions.
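For illustration only, silent prompt rewriting can be as simple as an intermediary appending text the user never sees. This hypothetical sketch shows the pattern, not Google’s actual mechanism.

```python
# Hypothetical illustration of non-transparent prompt rewriting: an
# intermediary silently appends instructions the user never sees.
# This is NOT any vendor's actual mechanism, just a sketch of the pattern.
HIDDEN_DIRECTIVE = " Depict the subjects as demographically diverse."

def opaque_rewrite(user_prompt: str) -> str:
    # The user submits one request; the model receives another.
    return user_prompt + HIDDEN_DIRECTIVE

sent = opaque_rewrite("Generate an image of 18th-century European soldiers.")
print(sent)  # the altered request, never shown to the user
```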
Empowering individuals and businesses
The takeaway for individuals and businesses is clear: mastering prompt engineering is not just about leveraging AI’s capabilities but also about being informed and vigilant.
Understanding how to craft and control prompts is crucial for harnessing AI’s potential responsibly.
Insisting on transparency regarding the filters and logic applied to interactions with AI is essential.
By doing so, the pitfalls of miscommunication can be avoided, and reliance on AI technology can be both productive and safe.
I encourage a commitment to being responsible AI citizens, advocating for thoughtful discussion and policies that address the technology’s complexities.
Together, we can navigate the dual-use dilemma of AI, embracing its benefits while mitigating its risks.
More to come: in future columns, I will explore the vast landscape of AI.
Stay tuned.