
How do you use AI?

December 01, 2025

The AI environment is expanding and changing daily. How do researchers use AI in their work, and what concerns do they have about its progress?

UVic sociologist Michael Ma, psychologist Jim Tanaka and geographer Ji Won Suh share their views on how they currently use and view AI.

How do you use AI in your work?

Sociology associate teaching professor Michael Ma uses AI, like many of us, through Google.

“When I pose a question to Google, the first response now is always with their AI,” says Ma. “So that would be the most common way in which I use AI in my work; but AI is also the default in most search engines.”

Ma explains how AI is now “baked into” most programs and devices, including Microsoft 365.

“When we use PowerPoint or Word and Outlook, we are using GenAI. Even Excel spreadsheets will recommend formulas or help solve spreadsheet issues.” Ma has tried Microsoft Word with Copilot but says, “I did not find it that useful, at least the version I have is not so helpful.”

When it comes to ChatGPT and ClaudeAI, Ma says, “It can help to do a whole assortment of things, such as formulate a particular question or provide a citation or help rephrase and integrate a document or set of problems.”

For more specific queries, Ma says he often uploads a PDF to Notebook LM or Scholar GPT and then “asks it to clarify certain research questions that I have in regard to the uploaded document.”

Actually, the question of “how do you use AI in your work” is moot, says Ma, “since most of us would agree that use of Microsoft, Google, or Apple products would already constitute a steady and consistent use of AI (both simple and sophisticated AI) every day and all day.”

Ji Won Suh, an assistant professor of geography, uses AI in her work in a variety of applications.

“My work focuses on the application side of AI rather than developing new AI algorithms. I use AI to monitor anthropogenic activities—such as deforestation and construction—using satellite Earth observation data. More specifically, I apply AI models to automate the mapping of anthropogenic features that previously required labor-intensive manual work, enabling regional-scale mapping to be conducted far more efficiently."

"I also use AI models to learn patterns from dense satellite time-series data—relationships often too complex for humans to interpret directly—to detect and track construction-related disturbances across space and time."

"In a practical sense, I also use generative AI models such as ChatGPT to improve and streamline my programming scripts, which helps me build more efficient workflows for processing big geospatial data."

Jim Tanaka, professor of psychology

Dr. Tanaka says he uses AI “to model the psychological categories of experts and novices.”

“AI provides me with experimental tools to investigate the differences and similarities between machine experts and human experts.”

Do you have any concerns about the progress of AI?

Ji Won Suh, assistant professor of geography

“One of my major concerns about the progress of AI is the growing inequality in its use. As AI tools become more advanced, the gap between people who know how to effectively leverage them and those who do not may widen, leading to significant differences in productivity, opportunities, and economic outcomes.”

“I am also concerned about unresolved issues related to copyright and data ownership. In many cases, input text or user-generated data can become part of massive training datasets, and it is not always clear whether creators are adequately protected or credited.”

“Additionally, as AI systems become increasingly capable, there is the risk of people becoming overly dependent on them. This reliance could gradually weaken our own problem-solving abilities and critical thinking skills, especially if AI is treated as a default solution rather than a tool that supports human judgment.”

Michael Ma, sociology associate teaching professor

“That is what concerns me: the lack of clarity. We need leadership regarding clear rules and regulations around AI use so that there would be no confusion as to how and what it can be used for. We need that clarity both for faculty and students.”