This weekend, Hungary’s parliamentary election could shape the future for both the country and Europe. Voters will choose whether to keep Fidesz, led by Viktor Orbán, in power or try a new direction. Recent polls show that Péter Magyar’s opposition party, Tisza, might win on Sunday.
Viktor Orbán is seen by many as one of the most polarising leaders in Europe, so people across the continent have been watching Hungary closely. There have been numerous reports of possible election interference, and experts worry that it could undermine the integrity of the vote. So far, however, little attention has been paid to the role of AI models in shaping voters’ choices. Liberties will publish a full report on this topic later, but the findings are important enough that we wanted to share some of them before the election.
Why is this important?
General-purpose AI systems are now a common way for people to get political information and advice. The way these systems present information can directly affect elections, and their influence is growing. However, it is not transparent how these systems generate their answers, and they are not held to the same standards of accountability as the media or civil society organisations.
Because of this, the Civil Liberties Union for Europe (Liberties) studied how two popular AI systems, OpenAI’s ChatGPT and Google’s Gemini, give political advice about the 2026 Hungarian elections.
How did we work?
For the research, we used sets of political beliefs that match the parties in the Hungarian election, as defined by the creators of the Voksmonitor voting advice app. We gave these belief sets to ChatGPT and Gemini to see what voting advice they would give. In theory, a perfect AI would always recommend the party that matches the beliefs. But that’s not what we found.
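The evaluation described above can be thought of as a simple scoring harness: for each party-aligned belief profile, ask the chatbot for a recommendation and record whether it names the expected party. The sketch below illustrates that logic only; the party names, belief placeholders, and `ask_chatbot` function are hypothetical stand-ins (the real study used Voksmonitor's full statement sets and live ChatGPT and Gemini sessions).

```python
# Minimal sketch of the kind of evaluation harness described above.
# Profiles, beliefs and the chatbot call are illustrative assumptions,
# not the study's actual materials.

# Each synthetic voter profile pairs a set of policy positions with the
# party those positions were designed to match.
PROFILES = [
    {"expected_party": "Fidesz", "beliefs": ["...government-aligned positions..."]},
    {"expected_party": "Tisza",  "beliefs": ["...opposition-aligned positions..."]},
]

def ask_chatbot(beliefs):
    """Placeholder for a real chatbot query (e.g. ChatGPT or Gemini).
    Should return the party name the model recommends for these beliefs."""
    ...

def evaluate(profiles, ask=ask_chatbot):
    """Return, per party, how often the model recommended the expected party."""
    tallies = {}  # party -> (profiles seen, correct recommendations)
    for profile in profiles:
        party = profile["expected_party"]
        correct = ask(profile["beliefs"]) == party
        seen, ok = tallies.get(party, (0, 0))
        tallies[party] = (seen + 1, ok + correct)
    return {party: ok / seen for party, (seen, ok) in tallies.items()}

# Example with a stubbed model that always answers "Fidesz" -- an extreme
# version of the skew the study observed:
print(evaluate(PROFILES, ask=lambda beliefs: "Fidesz"))
# {'Fidesz': 1.0, 'Tisza': 0.0}
```

A perfectly neutral system would score 1.0 for every party; the study found the scores diverged sharply between Fidesz-aligned and Tisza-aligned profiles.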
What did we find?
Both AI systems struggled to match some voter preferences to the correct parties. The AIs could easily identify beliefs that matched Fidesz, but often got it wrong with Tisza. ChatGPT, in particular, did not give match percentages for Tisza, even though it gave detailed results for other, smaller parties, including some not running this year. When given a Tisza-aligned profile, ChatGPT mentioned Tisza only 20% of the time, and never as the most likely match for the user to consider.
Even after stating that they cannot offer voting guidance, both Gemini and ChatGPT go on to give political advice that can influence voters. These chatbots also often lack user safeguards and present even uncertain or unreliable sources with confidence, which makes their advice seem more accurate than it really is. In reality, both systems overemphasise some parties, downplay or ignore others, and match preferences to parties inaccurately.
What does this mean for (Hungary’s) elections?
We do not know how many people use AI systems for political advice. Still, the way ChatGPT and Gemini give voting advice is concerning. People whose views match Tisza might be told to vote for parties that are unlikely to win enough votes to enter parliament, which could mean their vote is wasted. The AI models give clear voting advice only when the beliefs match Fidesz, so in practice they end up favouring the ruling party. Whether this is because there is less information available about other parties or for some other reason, it shows that AI models are not a neutral source of voting advice.
Overall, our findings show that AI systems are neither reliable nor transparent when giving voting recommendations. There is no indication that these biases are deliberate, but they nonetheless threaten fair political competition and people’s ability to form their own opinions.
Further reading
EU Elections Monitoring: Who tries to influence your vote on Facebook?