"Harvard Designs AI Sandbox That Enables Exploration, Interaction Without Compromising Security"

Generative Artificial Intelligence (AI) tools such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard have quickly become one of the most discussed topics in technology, sparking conversations about their role in higher education and beyond. Harvard announced its initial guidelines for using generative AI tools in July, and strong community demand left University administrators with the challenge of meeting that demand while addressing the security and privacy shortcomings of many consumer tools. Klara Jelinkova, vice president and University CIO, together with Bharat Anand, vice provost for advances in learning (VPAL), and Christopher Stubbs, dean of science in the Faculty of Arts and Sciences (FAS), proposed to a group of faculty and staff advisers a generative AI "sandbox" environment, designed and developed at Harvard, that would enable exploration of the capabilities of different Large Language Models (LLMs) while mitigating security and privacy risks. This article continues to discuss Harvard's AI sandbox aimed at enabling exploration without compromising security and privacy.

Harvard University reports "Harvard Designs AI Sandbox That Enables Exploration, Interaction Without Compromising Security"

Submitted by grigby1 CPVI