ChatGPT and its Impact on Higher Education Security
March 15, 2023
ChatGPT, the artificial intelligence chatbot released by OpenAI in late November 2022, is poised to disrupt virtually every industry, including higher education. Academics worldwide have embraced the buzz, optimistic that the tool could unlock new avenues of teaching and learning.
However, the obvious elephant in the room is that ChatGPT can be a tool for students to cheat by asking the chatbot to write an essay on any given topic or provide answers to complex problems. But the challenges go far beyond the classroom. Our team at OculusIT is taking a closer look at the security risks that come along with this game-changing technology. Where does ChatGPT sit on the cybersecurity risk barometer? What policies and practices should be implemented to mitigate the potential security threats of using ChatGPT? What is the threat to my institution’s data integrity?
As an AI language model, ChatGPT carries security risks, especially for higher education. Our team sat down and explored some of the potential risks your institution should be aware of.
Data Privacy Concerns
As an AI learning model, ChatGPT requires access to a large amount of data to function. And since ChatGPT is only as good as the data it is trained on, it can provide inaccurate responses to its users. Furthermore, if the data used to train the model is not adequately secured or anonymized, it could pose a privacy risk to students and faculty.
ChatGPT interacts with users over a network, which means attackers could intercept, access, or manipulate user data. Conversations with ChatGPT may contain sensitive and personal information, which could be used for malicious purposes. The threat to sensitive constituent data, such as personal information or academic records, is particularly concerning for security and risk leaders. ChatGPT could also be used as part of a social engineering attack to gain access to sensitive information: the platform can help attackers build trust with a user and gather information that could be used for malicious activities such as creating fake credentials.
Cyberattacks and Malware
ChatGPT is prone to malicious use. The platform can be used to write polymorphic malware, and it may itself become a target of cyberattacks such as DDoS, SQL injection, or other attacks. These attacks could lead to service disruptions or compromise the underlying infrastructure that powers ChatGPT.
It is common for a language-model chatbot like ChatGPT to use filters that restrict user access to potentially harmful or inappropriate content. However, in a startling revelation, researchers at the information security company CyberArk easily bypassed ChatGPT’s content filters to create “functional code” that injects a DLL into explorer.exe. ChatGPT, like any machine learning model, is also susceptible to being trained on malicious data or influenced by biased or false information. If a threat actor trains ChatGPT with harmful content, the model may produce responses that contain false, misleading, or malicious data.
A Tool for Entry-Level Hackers
AI has made it possible for people with limited technical knowledge to create phishing emails and fake profiles. While the program does have some precautionary measures in place, it is not fool-proof.
For example, if you instruct ChatGPT to write a phishing email campaign for students with a general message about a missing tuition payment, it will simply refuse and return a non-compliance response. However, malicious users are finding they can use the platform to create the general email communication and have it include a call to action to open an attached Excel file, with instructions that macros must be enabled to view the tuition information properly. Then, in a separate query, hackers ask ChatGPT for the full string of code that automatically runs an .exe file when macros are enabled in Excel. Furthermore, the tool explains to the user, in detail, how to implement the code. Once an attacker marries the pieces together and deploys the email, your institution is at risk.
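One practical countermeasure is to screen inbound Office attachments for embedded macros at the mail gateway. The sketch below is a minimal, hypothetical example of such a check (the file name and quarantine decision are illustrative assumptions, not any specific product’s behavior). It relies on the fact that modern Office documents are ZIP archives in which macro-enabled files carry a vbaProject.bin entry:

```python
import zipfile

def has_vba_macros(attachment_path: str) -> bool:
    """Return True if an Office attachment appears to contain VBA macros."""
    try:
        with zipfile.ZipFile(attachment_path) as doc:
            # Macro-enabled Office files (.xlsm, .docm, ...) embed vbaProject.bin.
            return any(name.endswith("vbaProject.bin") for name in doc.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.xls, .doc) are not ZIP archives; route them
        # for deeper inspection rather than assuming they are safe.
        return True

# Demo: build a stand-in macro-enabled workbook and test the check.
with zipfile.ZipFile("demo.xlsm", "w") as z:
    z.writestr("xl/vbaProject.bin", b"\x00")  # placeholder macro payload
print(has_vba_macros("demo.xlsm"))  # True, so quarantine for human review
```

A check like this will not stop every lure, but it forces the macro-enabled attachment described above through an extra review step before it reaches a student’s inbox.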
The sophistication of ChatGPT is seemingly limitless. Users can ask it for instructions on scanning for IP vulnerabilities, have it identify vulnerabilities within existing code, and even have it write code that is vulnerable to SQL injection. The launch of ChatGPT has created a flurry of opportunities for rookie threat actors with low technical skills. This development will congest the modern cyber threat landscape, increasing both the volume of threats and the number of potential threat actors.
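To make the SQL injection risk concrete, here is a minimal sketch, using Python’s built-in sqlite3 module and a hypothetical students table, contrasting the vulnerable pattern such generated code typically exhibits with the parameterized query reviewers should insist on:

```python
import sqlite3

# In-memory demo database with a hypothetical students table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id TEXT, name TEXT)")
conn.execute("INSERT INTO students VALUES ('1001', 'Alice')")

user_input = "1001' OR '1'='1"  # classic injection payload

# VULNERABLE: string formatting splices untrusted input into the SQL,
# so the payload above returns every row in the table.
rows = conn.execute(
    f"SELECT name FROM students WHERE id = '{user_input}'"
).fetchall()
print("vulnerable query returned:", rows)     # [('Alice',)]

# SAFE: a parameterized query treats the input as a literal value,
# so the payload matches nothing.
rows = conn.execute(
    "SELECT name FROM students WHERE id = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # []
```

The difference is a single line: the parameterized version treats attacker input as a literal value rather than as executable SQL.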
The fact that ChatGPT is currently free to use makes it all the more difficult for rookie threat actors to ignore. In January 2023, Check Point researchers reported on a hacking forum thread containing Python code. The allegedly ChatGPT-generated code searches for common file types such as Office files and PDFs, transfers them to a random folder, zips the folders, and uploads them to an FTP server.
Mitigating the Risks
It’s important to note that the risks of using ChatGPT are not unique to this particular model; they are common to most AI-based systems that rely on network connections and data input. It is crucial to ensure that ChatGPT is used in a secure environment with proper access controls and safeguards in place to mitigate these risks.
Higher education institutions should implement security measures like encryption, secure network protocols, access controls, and vulnerability management to minimize these risks. Here are some steps that higher education institutions can take to reduce some of the risks associated with ChatGPT:
- Limit access to the platform: Restricting access to ChatGPT from the campus network helps limit threats from within. In addition, access should be granted only to authorized personnel, based on job responsibilities, and reviewed regularly.
- Implement multi-factor authentication: MFA should be required for all users accessing ChatGPT to prevent unauthorized access.
- Train users on security best practices: As part of campus-wide security training initiatives, provide training on ChatGPT dos and don’ts, including how to identify and avoid phishing attacks.
- Incident response planning: Ensure your institution has a plan in place should there be a security breach or other incident involving ChatGPT.
- Monitor usage: Institutions should monitor all network usage of ChatGPT to detect and prevent potential misuse or cybersecurity threats; a minimal monitoring sketch follows this list.
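As a starting point for the monitoring item above, the sketch below scans resolver logs for queries to ChatGPT-related domains. The log format, sample lines, and domain list are assumptions for illustration; real resolvers (BIND, Pi-hole, and the like) each have their own log schema:

```python
# Domains associated with the service; adjust to your environment.
WATCHED_DOMAINS = ("chat.openai.com", "api.openai.com")

def flag_chatgpt_queries(dns_log_lines):
    """Yield (client_ip, domain) pairs for queries to watched domains.

    Assumes a simple 'client_ip queried domain' line format.
    """
    for line in dns_log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        client, domain = fields[0], fields[-1]
        if any(domain.endswith(d) for d in WATCHED_DOMAINS):
            yield client, domain

# Synthetic sample lines, not taken from any real resolver:
sample_log = [
    "10.4.2.17 queried chat.openai.com",
    "10.4.2.99 queried www.university.edu",
]
for client, domain in flag_chatgpt_queries(sample_log):
    print(f"ALERT: {client} contacted {domain}")
```

In practice this logic would live in a SIEM rule rather than a standalone script, but the principle is the same: watch the resolver, not just the endpoint.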
While ChatGPT is still new and growing in sophistication each day, it is critical to stay on top of the latest trends and threats to ensure your institution is protected.
If your institution is looking to bolster its IT security program, we can help. OculusIT offers robust Managed Security Services exclusively for higher education.
The Role of CISO and Security Operations Center in Reinforcing Safeguards
ChatGPT is a powerful tool that can generate text responses to a wide range of prompts, including potentially sensitive or confidential information. As such, there are several risks associated with using ChatGPT, such as the potential for it to generate inappropriate or offensive content, leak sensitive information, or be manipulated by malicious actors.
To mitigate these risks, it is important to have a robust cybersecurity strategy in place, which includes the appointment of a Chief Information Security Officer (CISO) and a dedicated Security Operations team. The CISO is responsible for overseeing the organization’s information security program, ensuring that all security policies and procedures are in place and that the organization is compliant with applicable regulations and standards.
The Security Operations team is responsible for monitoring the organization’s network and systems for potential threats and vulnerabilities, and responding to security incidents as they occur. This includes regular vulnerability scanning and penetration testing of the ChatGPT application, as well as ongoing monitoring of user activity to identify potential threats or breaches.
In addition to these technical measures, it is also important to establish clear policies and guidelines for the appropriate use of ChatGPT, including restrictions on the types of information that can be shared and safeguards to protect sensitive information. Regular training and awareness programs can help ensure that all users understand the risks associated with using ChatGPT and know how to use the tool safely and securely.
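One concrete form such a safeguard can take is a pre-submission screen at whatever gateway or browser extension mediates ChatGPT access. The sketch below is illustrative only; the patterns are hypothetical examples and would need tuning to your institution’s actual identifier formats:

```python
import re

# Illustrative patterns only, not a complete DLP ruleset.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student_id": re.compile(r"\b[A-Z]\d{8}\b"),  # hypothetical ID format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sensitive_data_found(prompt: str) -> list:
    """Return the names of any sensitive patterns present in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

prompt = "Summarize the appeal letter from student A12345678 (SSN 123-45-6789)."
hits = sensitive_data_found(prompt)
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}; redact before sending.")
```

Pattern matching will never catch everything, which is why the training and awareness programs mentioned above remain the primary control.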