Paper Summary
AI chatbots are being rapidly adopted into our lives in many forms due to their anonymity, utility, increasingly anthropomorphic behavior, and several characteristics that implicitly and explicitly exploit various human cognitive biases and frailties. Given that AI chatbots will soon dominate our communication and interaction through all forms of electronic media, we have a responsibility to be aware and vigilant of the increasing opportunities to be misinformed, exploited, and influenced. I summarize several psychological factors known to influence our willingness to engage with and accept AI chatbot feedback. Examples are given of various nefarious applications of AI chatbots, some of which are already established, which serve as another reminder to critically evaluate all sources of information available to us. I also use a high-profile seismic imaging example to illustrate how easily facts can be misrepresented.