Square Enix AI Tech Demo The Portopia Serial Murder Case Rated Among the Worst Games on Steam


Just last Sunday, Square Enix's AI division released an AI tech demo called The Portopia Serial Murder Case on Steam. The game is meant to be a technical showcase of Square Enix's attempt to integrate natural language processing (NLP) into video games.

Unfortunately, things didn't go as planned: just four days after release, the game's Steam page dropped to a Very Negative rating, putting it among the worst-reviewed games on Steam.

The Portopia Serial Murder Case itself is by no means a bad game. The title was originally released back in 1983 for the NEC PC-6001 and sold over 700,000 copies, a huge achievement by 1980s standards.

According to the media at the time, the original Portopia Serial Murder Case was also considered a success in Japan because it let players approach its objectives in multiple ways, something that was uncommon back then. The press also noted that even though the game has no actual "game over," it features a surprisingly well-written storyline and a twist ending.

Thus, it isn't surprising that Square Enix itself rightfully calls it "the first real detective adventure" game.

Fast forward 40 years, and unfortunately, the same cannot be said about this re-release.

The concept of NLP isn't a bad fit for this game at all; in fact, it could be the perfect way to surpass the technical limitations of the original. The original Portopia belongs to the adventure genre, and back in the 1980s, an "adventure game" basically meant "typing commands into a computer."

For those unfamiliar with the technology, natural language processing (NLP) is a branch of artificial intelligence focused on enabling machines to understand, interpret, and generate human language. In video games, it typically means AI that lets NPCs respond dynamically to unscripted player input.
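To illustrate the idea, here is a deliberately tiny sketch (not Square Enix's actual system; the intents and phrasings are made up for this example) of the difference between a fixed multiple-choice menu and a dialogue layer that maps free player text onto an intent:

```python
import re

# Hypothetical intents a dialogue layer might recognize (illustrative only).
INTENTS = {
    "go_port": ["go to the port", "head to the port", "visit the port"],
    "ask_identity": ["who are you", "what is your name"],
}

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so phrasing variants still match."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def match_intent(player_text: str) -> str:
    """Very naive 'NLP': pick the intent with the most word overlap."""
    words = set(normalize(player_text).split())
    best, best_score = "unknown", 0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            score = len(words & set(phrase.split()))
            if score > best_score:
                best, best_score = intent, score
    return best

print(match_intent("Lets go to the port!"))  # matches go_port
print(match_intent("Who are you?"))          # matches ask_identity
```

Real systems use far more sophisticated language models than word overlap, but the point is the same: the player types whatever they want, and the game still maps it onto a meaningful response instead of forcing a menu pick.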

If you want to see a "proper" demo implementation of this kind of technology in video games, you might want to give Inworld Origins a peek. The game is still in limited early access, but it is planned for a Steam release sometime this summer.

Fortunately, 2kliksphilip has made a video showcasing that game, along with an in-depth explanation and some interesting experiments testing how capable this technology is. We will link the video down below.

Going back to Square Enix's Portopia Serial Murder Case AI tech demo, it's hard to see how the implementation of such tech in this title could flop, but unfortunately, it does.

Simply put, the concept is fine, if not perfect; it's the execution that fails.

In concept, the flow of the game should look like this trailer video:

From the trailer, we see that players can type their own replies when asked by NPCs, instead of the traditional choose-your-response system. It creates a good dynamic and strong immersion, giving the impression that the player has full control of the game.

Then, what went wrong?

Unfortunately, most if not all of the player responses shown in the trailer are scripted answers. Players are expected to reply with those exact sentences, or something very close to them, for the AI NPC to understand.

A good example can be seen in these Steam reviews:

(Screenshot: first Steam review)

In this first review, we can see that the player first gave a silly response, to which the AI understandably replied, "Maybe we should focus on the task at hand?" However, when the player asked an actual serious question such as "Who are you?", the AI looped back to the same generic response, despite the question making perfect sense.

(Screenshot: second Steam review)

The second review shows even clearer proof that this NLP AI fails to work as intended, as it forces the player to enter the exact scripted prompt in order to progress.

In this case, the player seems to be aware that the next destination to investigate is the port, so they type "lets go to the port." The AI responds with "I'm not sure what to say about that," and the player's repeated attempts to rephrase more specifically are all greeted with the same "error" reply.
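The behavior the review describes looks a lot like exact-prompt matching rather than real language understanding. Here is a hedged sketch (the scripted line and replies are hypothetical, chosen to mirror the review) of why that approach rejects every near-miss phrasing:

```python
# Hypothetical scripted prompt the game expects, and its fallback reply.
SCRIPTED_PROMPT = "Let's go to the port."
ERROR_REPLY = "I'm not sure what to say about that."

def brittle_reply(player_text: str) -> str:
    """Exact string comparison: any deviation triggers the error reply."""
    if player_text == SCRIPTED_PROMPT:
        return "All right, let's head to the port."
    return ERROR_REPLY

# Every reasonable variant fails except the one exact scripted line.
for attempt in ["lets go to the port",
                "Go to the port",
                "Let's go to the port."]:
    print(repr(attempt), "->", brittle_reply(attempt))
```

If the matching really is this strict under the hood, then the "NLP" layer adds nothing: the player is effectively guessing a password, which is exactly the frustration the reviews describe.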

After multiple attempts, they gave up and typed a random destination instead, only for the AI to surprisingly respond by suggesting a visit to the port, which is ironically kind of funny.

On a more serious note, this really shows how early this NLP tech from Square Enix is, which begs the question: is NLP really that bad?

Fortunately, NLP itself is not the issue here, as Inworld Origins demonstrates the technology properly despite being similarly early in development.

Now you might be wondering: why is it so bad in Square Enix's case? The answer seems to be that the NLP model Square uses is not one like ChatGPT's, but one they developed themselves. The game does not rely on data from OpenAI's servers; instead, the model is stored locally within the game, which is why the file size is quite large for a simple demo.

Does that mean the AI Square developed in-house is bad? Not exactly. Reports say that Square Enix actually disabled this exact feature in the game, which defeats the purpose of the demo. The reasoning is that they were unable to stop the AI from generating "unethical replies."

In summary:

Square Enix released an AI tech demo on Steam to showcase its NLP technology, but it was showered with bad reviews, as the game turned out to be deeply disappointing, making it one of Steam's worst-rated games.

The reason it flopped is that the company decided to disable the core function of the showcase, the NLP AI itself, because they could not figure out how to stop it from providing "unethical replies."
