So, some of you have been asking me through Twitter and other social media about boxdox-bb, and the fact that it doesn’t have a traditional listing of framedata, and instead, just serves as a reference to how the moves are scripted. There is a huge reason for this, and until further notice, it will likely remain that way.
But you can do it for SF5 fine! You just explained how you do it!
Yes, but the same method has absolutely no chance of working for Blazblue.
Sometimes I feel like the way framedata works is just as abstract and random as this video about plumbuses. Anyways, let's start from the beginning.
Note: This article was sponsored by my Patrons. To help support the creation of more content like this, please consider becoming a Patron here
What is Framedata Anyways?
When people talk about framedata, they are generally talking about a specific set of properties associated with an attack in a fighting game. Knowledge of these properties can be used to understand how fast a move is, what combos after said move, and how safe the attacker is when the move is blocked. This data is essentially what dictates the flow of the game.
Startup – How many frames the attack takes to become active.
Active – How many frames the attack remains active.
Recovery – How many frames until the character can move or block after the move is over.
Hitstun – How many frames the opponent is stunned when the attack hits.
Blockstun – How many frames the opponent is stunned when the attack is blocked.
To give an example of a situation where framedata is effective, let's say you are playing SF5, and it's a mirror match: Ryu vs Ryu, the classic matchup. Your opponent keeps doing st.MP followed by st.MP, hadouken on block. You feel like you have to just sit back and watch it happen, as pushing any button just ends up with you being counterhit by the second st.MP.
Ryu's st.MP is +1 on block and has a 5 frame startup. This means that you have a 4 frame gap in which to start a move before the second st.MP hits. A list of framedata lets you come to this conclusion, and then look for moves with 4 frames of startup or less to counter your opponent's simple blockstring.
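The arithmetic above is simple enough to sketch in a couple of lines of Python (the helper function is just for illustration; the numbers are from the SF5 example):

```python
def gap_on_block(advantage_on_block, next_move_startup):
    # Frames the defender has to start a move before the attacker's
    # next attack becomes active.
    return next_move_startup - advantage_on_block

# Ryu st.MP is +1 on block and the follow-up st.MP has 5 frame startup:
gap = gap_on_block(1, 5)  # 4 -> look for moves with 4 frame startup or less
```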
So, it just hit me that the yield keyword I wanted to use to implement a bot in Python can also be used in C# in an iterator. This has inspired me to update my design to use yield.
Let me explain what I mean. Now, normally, my bot goes through a loop like this.
Collect information about current game state.
Check and see if something is happening that requires the bot to stop what it is doing to react.
Continue doing what the bot was doing.
I implemented this using a simple state framework, where each state has variables it uses to store how far along it is in whatever it's doing. Each frame the state is called, and it presses whatever buttons it needs to press that frame. The annoying part is that if you wanted to make a state that, say, presses D, F, D, F, HP, you would have to write it as a function that gets called 5 times and inputs the proper button on each frame. That would look something like this.
int i = 0;
// Called once per frame:
if (i == 0) Press("D");
else if (i == 1) Press("F");
// ...one branch per frame, up through HP
i++;
I don’t like this, and it was the primary thing that made me get annoyed working on KenBot.
Now, the yield keyword isn’t normally used the way I am preparing to use it, but it actually solves more than one issue. The updated code will work like this.
yield "I just pressed D";
yield "Oh yeah, just pressed F";
In this example, it may seem like it's the same amount of glue code in between the code for each button press, but there's normally a lot more decision making and programming going on than this.
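The same per-frame pattern in Python uses a generator, where each yield hands control back to the frame loop. A minimal sketch (press() is a hypothetical helper standing in for the bot's input code):

```python
pressed = []

def press(button):
    # Hypothetical helper: queues a button press for the current frame.
    pressed.append(button)

def shoryuken():
    # One yield per frame: the driver resumes this generator once per game
    # frame, so the counter-variable glue code disappears entirely.
    press("D"); yield
    press("F"); yield
    press("D"); yield
    press("F"); yield
    press("HP"); yield

# The driver just advances the generator once per frame:
for _ in shoryuken():
    pass  # one game frame elapses per iteration
```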
I am going to try to refactor my code this weekend and see how far I get into this and see if it helps anything.
So, I put a lot of work into the most recent framework for KenBot, and I created something that was usable by more than just me. I brought it to EVO, but I wanted to spend more time enjoying EVO than sitting around babysitting a KenBot station, so not many people got a chance to play it. The code repository I published has been used by multiple people to create SF4-playing bots for different characters, to the point where I consider it a success.
However, I am not satisfied with the framework. It is clunky to use, and I want to make something that I can modify and use for any game, period. To do this, I need to abstract some major features out of my current codebase.
So, today I finished work on http://finalclause.dantarion.com/hitboxes
It is a site that allows you to view hitboxes, hurtboxes, and sprites for UNIEL. However, it took a lot of work to get from nothing to this, so I wanted to document it. So let's start with the files on the disc!
Getting to the files
Once I had the disc extracted onto my computer, I looked around and located what looked like the character files.
I assumed that .pac was a container format, and that the files inside were gzipped. I used WinRAR to extract these and took a look inside using Hex Workshop. Examining the file quickly showed a simple structure with a list of file offsets, sizes, and pointers to filenames. Writing an extractor took about 15 minutes.
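An extractor for that kind of structure is only a few lines of Python. This is a sketch over a made-up layout (u32 entry count, then offset/size/name-pointer triples); the real .pac header differs:

```python
import struct

def extract(data):
    # Hypothetical container layout: u32 entry count, then one
    # (name_ptr, offset, size) triple of u32s per file.
    count, = struct.unpack_from("<I", data, 0)
    files = []
    for i in range(count):
        name_ptr, offset, size = struct.unpack_from("<III", data, 4 + i * 12)
        # Filenames are NUL-terminated strings elsewhere in the file.
        name = data[name_ptr:data.index(b"\x00", name_ptr)].decode("ascii")
        files.append((name, data[offset:offset + size]))
    return files
```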
Now, it was onto the next step. Getting the ART.
Getting the Art
I referenced muave's Melty Blood viewer for information on the .HA6 file, which contains all of the art. It's just another container file, holding uncompressed DDS textures. I was able to write an extractor easily, which left me with a folder of DDS textures. After realizing that the DDS format was the same for all of them, I modified my script to output PNG instead of DDS.
So, now we have the sprites?
No, not exactly. You see, video card hardware works best with textures whose dimensions are powers of 2, like 128, 256, 512, etc. It seems like French Bread used some kind of tool that split their sprites into 32×32 squares on a texture that is 512 pixels tall. In addition, the textures aren't stored colored. Since the game lets you select a color scheme for each character, the textures are stored with an indexed palette. This means they are all 1-byte greyscale, where color 0, black, references palette_color[0] and color 255, white, references palette_color[255].
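The indexed-to-color lookup itself is trivial; a sketch, assuming the palette is a table of 256 RGBA tuples:

```python
def apply_palette(indexed, palette):
    # indexed: one palette index per pixel (bytes or a list of ints 0-255)
    # palette: sequence of 256 (r, g, b, a) tuples
    return [palette[i] for i in indexed]
```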
For the next step, I needed to apply the palette as well as take the 32×32 tiles and rearrange them into sprites.
The "cg" file contains the information about the texture chunks and how to arrange them into sprites. Luckily, it also contains the default palette for each character. After a lot of trial and error, the above picture ended up as…
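The rearrangement itself is just block copies. A sketch, assuming the cg file boils down to (src_x, src_y, dst_x, dst_y) placements per chunk (the real record format is more involved than this):

```python
def compose(texture, chunks, width, height, tile=32):
    # texture: 2D list of palette indices; chunks: (src_x, src_y, dst_x, dst_y)
    # placements describing where each tile of the texture lands in the sprite.
    sprite = [[0] * width for _ in range(height)]
    for sx, sy, dx, dy in chunks:
        for row in range(tile):
            for col in range(tile):
                sprite[dy + row][dx + col] = texture[sy + row][sx + col]
    return sprite
```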
Some of you may look at this and say… why is his hair cut off?
The hair is actually stored in a separate sprite for this particular frame. The file that holds the character script dictates which images are drawn where, along with the hitboxes, hurtboxes, character state, etc. But that I'll have to talk about in the next post.
July 24th saw the release of a new fighting game, Under Night In-Birth, on both PSN and retail media. As with any fighting game, understanding move properties is very important to getting an advantage over your opponent. In the past, this data was gathered through intensive training sessions and extended periods of gameplay. After hundreds of hours, you can figure out that move X is faster than move Y, and so on.
However, it's 2014 and I don't have time for that. Kappa.
Let's get the data we want!
The first thing I did was buy the game and get it running on both a retail and a jailbroken PS3. Then I examined the game's disc structure. A quick glance around showed a folder called script. A quick glance in that folder showed a bunch of plain text scripts.
There was literally a text file with the words "NoLocalDebug" in it! It contained a list of constants for debug mode, with everything set to 0 (off).
So, I took this file, changed all the 0's to 1's, enabling all the listed debug functions in the game, and booted it back up! The result was some extra data appearing in-game! The game now shows the startup, active, and recovery frames of each move, as well as frame advantage on hit/block. Here's a video of it in action!
And here's another!
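Flipping every flag at once is easy to do programmatically. A sketch that assumes a simple "Name 0" per-line syntax (the real file's format may differ):

```python
import re

def enable_all(text):
    # Turn every "SomeFlag 0" line into "SomeFlag 1".
    return re.sub(r"^(\S+\s+)0$", r"\g<1>1", text, flags=re.M)
```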
However, this was not good enough for me. It's one thing to be able to see the framedata in-game, but it's another to be able to review it offline. Ideally, you'd want all the info laid out in front of you in a table, allowing you to study it when away from the game.
I will cover the voyage towards that in my next post.
KenBot v1 was very basic. Most of his gameplay revolved around one specific chain of events.
Is the opponent doing nothing? Mash DB,DF.
Is the opponent doing something near me? Mash D+PPP,F+PPP.
Am I being thrown? Mash Tech!
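Those three reactions boil down to a tiny decision function. A sketch (the state names here are mine; the real bot reads game memory rather than strings):

```python
def kenbot_v1(opponent_near_and_active, being_thrown):
    # The entire v1 "AI": tech throws, DP anything active nearby,
    # otherwise mash down-back/down-forward.
    if being_thrown:
        return "LP+LK"            # throw tech
    if opponent_near_and_active:
        return "D+PPP, F+PPP"     # mash the Shoryuken motion
    return "DB, DF"               # alternate down-back / down-forward
```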
This alone proved effective, but with some limitations. I can't seem to get past about 3 frames of input delay, which means KenBot will never be able to react to a 3 frame move! However, since he mashes DB,DF, he has a 50% chance of randomly blocking one of them anyways. Command grabs will hit him, unless they are slow ones like Abel's or Honda's.
Here KenBot counters cr.LP and cr.LK, but NOT st.LP or st.LK, because they are 4 frames. This means that many jabs and a ton of command grabs will just hit KenBot! And Shoryukens! And… a bunch of other stuff too. At this stage KenBot didn't understand overheads, or moves that were too fast to punish with DP or Ultra. You could Focus backdash at the right distance and KenBot would fierce DP!
He had to become smarter! He had to become more aware! And in order to do so, I needed to get more feedback from the bot! So, I took KenBot, who at this point had a ton of hardcoded reactions for a few character states, and rewrote the code and added a GUI.
My next article will be about training KenBotv2 to…do a lot more than DP.
So, the world likes KenBot.
You can follow me on Twitter: http://twitter.com/dantarion
Thanks to all that have been spreading my work around. It's pretty awesome to see something I made in the past week get thousands of views.
Normally I work on boring things like modding tools or reference sites. It's fun to work on stuff that is a bit more entertaining for the average gamer out there.