
In continuation of Part 2 of “Creating a Machine Learning Auto-shoot bot for CS:GO”, using my minimalist adaptation of the VGG network originally designed by the Visual Geometry Group at Oxford University, I have managed to use offline training to get satisfactory head-shot results in Counter-Strike: Global Offensive.

Where we last left off, I had managed to use real-time training to teach the network I had dubbed TBVGG3 to detect and shoot at a football in the map Dust II with very little to no miss-fire, although it was not as easy to train as I would have liked, particularly for such a simple and well-defined object: a mostly white, round football with black pentagons chequered across its surface.
Of course, I knew what the problem was: with real-time training it was hard to get the random sample variation that a good training model needs. In the original article, where I trained a small neural network on 3x3 pixel inputs to target mostly aqua-blue models, the scene was much less complex and easier to train against, having mostly just a black or grey background with no real textural variation (because I had disabled textures). In CS:GO, however, I was training against noisy, textured pixel data, so to attain a well-trained model I needed to first sample a dataset and then train a network on that dataset in an offline manner.

I first designed an adaptation of the FPS bot which would take little picture snapshots of whatever I aimed at in the reticle field and save them to file when I activated certain keyboard keys; another key would show a framing border so that I could line up the shots before taking samples. I used this program to collect 300 samples of only Counter-Terrorists (mostly head and upper body shots) for this demonstration, plus 300 samples of random background scenery.
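To make the sampling idea concrete, here is a minimal sketch of the grab-and-save core of such a tool, assuming an X11/Linux setup; the 28x28 region size, the output file name, and the omission of the key handling and framing border are my own simplifications for illustration, not the article's exact values.

```c
/* minimal sketch: grab a small block around the reticle and save raw RGB.
   Assumptions: X11/Linux, 28x28 sample size. Build: gcc grab.c -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

#define SS 28 /* assumed sample width/height in pixels */

int main(void)
{
    Display* d = XOpenDisplay(NULL);
    if(d == NULL){return 1;}
    const Window root = DefaultRootWindow(d);

    /* centre of the screen, where the reticle sits */
    Screen* s = DefaultScreenOfDisplay(d);
    const int cx = s->width  / 2;
    const int cy = s->height / 2;

    /* grab an SS x SS block around the reticle */
    XImage* img = XGetImage(d, root, cx-(SS/2), cy-(SS/2), SS, SS, AllPlanes, ZPixmap);
    if(img == NULL){XCloseDisplay(d); return 1;}

    /* write out raw 8-bit R,G,B pixel data,
       assuming a common 24-bit TrueColor layout */
    FILE* f = fopen("sample.rgb", "wb");
    if(f != NULL)
    {
        for(int y = 0; y < SS; y++)
        {
            for(int x = 0; x < SS; x++)
            {
                const unsigned long p = XGetPixel(img, x, y);
                const unsigned char rgb[3] = {(p & 0xFF0000) >> 16,
                                              (p & 0x00FF00) >> 8,
                                               p & 0x0000FF};
                fwrite(rgb, 1, 3, f);
            }
        }
        fclose(f);
    }

    XDestroyImage(img);
    XCloseDisplay(d);
    return 0;
}
```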

These samples were saved out to file in a range of different formats, all in R,G,B order: 8 bits (1 byte) per channel raw pixel data; 3 floats per channel, 0–1 normalised; 3 floats per channel, zero centred; and finally the set that I used for training, 3 floats per channel, mean centred (each image normalised by the standard deviation of the entire dataset, per colour channel).

The offline training itself was very simple: I designed a small program which just loaded the 300 Counter-Terrorist samples and the 300 random background samples and trained the network on them, and the trained weights were then supplied to the original auto-shoot program for real-time detection and shooting.
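As a rough sketch of those export formats, the conversion below computes per-channel statistics over the whole dataset and then emits the three float variants; the 28x28 sample size is an assumption, and the exact zero-centring convention (here x/127.5 - 1) is my guess rather than something spelled out above.

```c
/* sketch of the export formats; dataset loading is elided.
   "data" holds n samples of SS*SS pixels as interleaved 8-bit R,G,B. */
#include <math.h>
#include <stddef.h>

#define SS 28          /* assumed sample width/height */
#define SPP (SS*SS*3)  /* values per sample */

/* per-channel mean and standard deviation over the entire dataset */
static void channel_stats(const unsigned char* data, const size_t n,
                          float mean[3], float sdev[3])
{
    double sum[3] = {0}, sqs[3] = {0};
    const size_t total = n * SS * SS; /* pixels in the dataset */
    for(size_t i = 0; i < total*3; i++)
    {
        const double v = (double)data[i];
        sum[i % 3] += v;     /* i % 3 is the colour channel (RGB interleaved) */
        sqs[i % 3] += v * v;
    }
    for(int c = 0; c < 3; c++)
    {
        mean[c] = (float)(sum[c] / (double)total);
        sdev[c] = (float)sqrt((sqs[c] / (double)total) - (mean[c] * mean[c]));
    }
}

/* convert one 8-bit sample into the three float formats */
static void convert(const unsigned char* in, const float mean[3],
                    const float sdev[3], float norm01[SPP],
                    float zeroc[SPP], float meanc[SPP])
{
    for(size_t i = 0; i < SPP; i++)
    {
        const float v = (float)in[i];
        norm01[i] = v / 255.f;            /* 0-1 normalised */
        zeroc[i]  = (v / 127.5f) - 1.f;   /* zero centred (assumed convention) */
        /* mean centred, normalised by the per-channel dataset std dev */
        meanc[i]  = (v - mean[i % 3]) / sdev[i % 3];
    }
}
```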

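The training program's overall shape was presumably along these lines; to be clear, net_backprop() and net_save() below are hypothetical stand-ins for TBVGG3's real training entry points, and the epoch count and file names are likewise assumptions for illustration only.

```c
/* sketch of the offline training loop's shape; the net_* functions are
   hypothetical stand-ins, not TBVGG3's actual API. */
#include <stdio.h>
#include <stdlib.h>

#define SPP (28*28*3) /* floats per mean-centred sample (assumed size) */
#define NSAMP 300

extern void net_backprop(const float* input, float target); /* hypothetical */
extern void net_save(const char* path);                     /* hypothetical */

static float* load_set(const char* path) /* load NSAMP raw float samples */
{
    FILE* f = fopen(path, "rb");
    if(f == NULL){return NULL;}
    float* set = malloc(sizeof(float) * SPP * NSAMP);
    if(set != NULL && fread(set, sizeof(float), SPP*NSAMP, f) != (size_t)SPP*NSAMP)
        {free(set); set = NULL;}
    fclose(f);
    return set;
}

int main(void)
{
    /* 300 Counter-Terrorist samples (target 1) and 300 background
       samples (target 0), both in the mean-centred float format */
    float* ct = load_set("ct_samples.dat");
    float* bg = load_set("bg_samples.dat");
    if(ct == NULL || bg == NULL){free(ct); free(bg); return 1;}

    for(int epoch = 0; epoch < 1000; epoch++) /* assumed epoch count */
    {
        for(int i = 0; i < NSAMP; i++)
        {
            /* alternate the classes so the network never sees
               a long single-class run */
            net_backprop(&ct[i*SPP], 1.f);
            net_backprop(&bg[i*SPP], 0.f);
        }
    }

    net_save("weights.dat"); /* these weights are then loaded by the
                                real-time auto-shoot program */
    free(ct); free(bg);
    return 0;
}
```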
Still, 300 samples was nowhere near enough training data for a flawless model, as there are many different Counter-Terrorist player models and a number of different maps and backgrounds. I estimate that a sample set of 1,000 to 4,000 would have been more adequate, but collecting one is a very time-consuming process: enemies don’t stand still for you that often, and which player models the game gives you per match is completely random (I later discovered the bot_stop command, which does make enemies wait around for you). But the 300-sample set did work satisfactorily well, as you can see in the following play-through video.
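For anyone repeating the sampling process, bot_stop is set from the in-game developer console; as far as I recall it is cheat-protected, so sv_cheats may need enabling on a local offline match first.

```
// in the CS:GO developer console, on a local offline match
sv_cheats 1   // bot_stop is (to my recollection) cheat-protected
bot_stop 1    // bots freeze in place, making sampling far easier
bot_stop 0    // resume normal bot behaviour
```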
