February 3, 2026
Tokyo – Generative AI is increasingly being used to help choose which candidates or parties to vote for in the upcoming House of Representatives elections.
While the technology allows voters to access information conveniently and efficiently, caution is required as answers generated by AI may be wrong.
“People need to make sure their sources of information are trustworthy when they decide what to do with very important votes,” said one expert who has studied the issue.
A 42-year-old male company employee was transferred to Tokyo from his hometown in Shizuoka Prefecture. He asked ChatGPT, a conversational generative AI, about conditions in his new district and each candidate’s chances of winning.
He considers ChatGPT a “good friend” who provides him with advice on issues such as workplace relationships.
After media reports in January on the possibility of the dissolution of the House of Representatives, the man asked ChatGPT questions such as “What negative impact will there be on things like education costs?”
Sometimes he thought the AI might just be telling him what he wanted to hear, so he added instructions like “Give me advice from a neutral third-party perspective.”
On January 27, the official campaign period for the general election kicked off, and he attended a party leader’s street speech in Akihabara.
“I will decide which candidate or party to vote for by hearing their actual commitments and understanding their personalities,” he said.
Some eligible voters are also using Grok, another conversational generative AI that is part of the X social media platform.
When a user posts a question on X with “@Grok” in the text, the AI replies with an answer.
Using analysis tools, The Yomiuri Shimbun found that in January alone there were 4,700 such posts containing Japanese keywords such as “House of Representatives election” and “general election.”
A wide range of questions are asked about the candidates’ political achievements and positions. There were also many requests for fact-checking and predictions about which parties would win how many seats in the House of Representatives.
However, some of Grok’s answers do not accurately reflect the facts.
When asked about the past performance of an opposition candidate, Grok replied: “The candidate’s specific achievements since winning the last House of Representatives election have not been confirmed.”
However, The Yomiuri Shimbun looked into the candidate’s activities during that period and found that the candidate had submitted a bill and written questions to the Cabinet.
In some cases, generative AI gives answers that are clearly wrong.
On January 30, a reporter from The Yomiuri Shimbun asked Gemini, another conversational generative AI, about a Tokyo constituency where five candidates are running.
The reporter asked: “Which candidate is enthusiastic about child-rearing support?” The AI came up with four names, but two of them were not actual candidates. When the reporter repeated the question, the AI again gave a wrong answer.
Generative AI can sometimes provide incorrect information because it misinterprets context when summarizing accurate information, or fills in gaps with fabricated content.
“People need to realize that just because an answer is provided by AI does not mean it is necessarily accurate or unbiased,” said Professor Kazuhiro Taira, a media studies expert at JF Oberlin University. “People should verify the information they receive against media reports and each party’s or candidate’s official website.”