Be vigilant when using ChatGPT

While harnessing innovation is key to “Engineering the Extraordinary,” we must also successfully navigate the risks of emerging technologies that can put our private data at risk.

Toward the end of 2022, OpenAI released ChatGPT – an AI chatbot that accepts prompts and returns human-like responses. In the less than three months since its release, there have been many discussions, debates, and controversies surrounding ChatGPT's capabilities, limitations, and potential for both ethical and unethical use.


While ChatGPT is groundbreaking, it does pose some risks.

  • First, ChatGPT stores all user inputs and uses them to improve its service, meaning the information we input will be retained and may later surface in responses to others. Recently, Amazon employees observed ChatGPT outputs that closely resembled Amazon's internal proprietary and confidential code, and noted that ChatGPT could correctly answer internal technical questions.
  • Another concern is compliance with data protection laws. ChatGPT complies only with California's Consumer Privacy Act (CCPA) and is not HIPAA- or GDPR-compliant, so there can be legal ramifications if you input any patient Protected Health Information (PHI).
  • Additionally, computer hackers are actively discussing how they can leverage ChatGPT for phishing campaigns and malware development.
  • Furthermore, ChatGPT can spew misinformation, so it's important to verify any information the model generates. If users input inaccurate information, ChatGPT may learn it and repeat that inaccurate information to anyone who prompts it. The model may also inadvertently generate false or misleading information, which could harm the company's reputation or lead to incorrect decisions.
  • Also, as with any machine learning model, ChatGPT can pick up biases and prejudices from the data it is trained on. This means that the output generated by the model may not always be entirely objective or unbiased, especially when it comes to sensitive topics. This could potentially lead to unintentional discrimination, causing harm to individuals or groups.
  • Moreover, it can be tempting to rely heavily on ChatGPT for critical tasks, such as medical diagnoses or legal advice. However, it’s important to remember that ChatGPT is not infallible and should not be relied upon as the sole source of information. In such situations, it’s important to seek out additional sources of information and to consult with experts in the field.
  • Finally, ChatGPT's training data extends only through 2021, so it cannot reliably answer questions about anything more recent.


While ChatGPT and other AI tools can be useful in many applications, they can also pose business and information protection risks. ChatGPT can be an incredibly useful tool in the workplace, but it's important to recognize the risks involved: we should be vigilant about what we input into it, and avoid entering sensitive, confidential, or proprietary information.
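One practical way to stay vigilant is to screen prompts for obviously sensitive content before they ever leave the company. The sketch below is purely illustrative, not an official control: the pattern names and regexes are assumptions standing in for whatever a real data-loss-prevention policy would define, and matching nothing does not prove a prompt is safe.

```python
import re

# Illustrative patterns only -- a real DLP policy would be far more complete.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only|proprietary)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt.

    An empty list means no known pattern matched -- it does NOT
    guarantee the prompt is safe to share with an external service.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: refuse to submit a prompt if anything matches.
hits = screen_prompt("Patient SSN is 123-45-6789, please summarize this record.")
if hits:
    print(f"Blocked: prompt contains {hits}")
```

A simple keyword and regex screen like this catches only the most blatant cases; the safer habit remains not typing sensitive material into external tools in the first place.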


If you are unsure how to solve your data issues or would like to speak with an expert to learn more, the Anyon Consulting BI group can help! Our database experts and consultants can answer questions about custom dashboards, help with your database implementation, optimize your database platform, and much more. Contact us today to learn more about our Custom Database Development services.
