

  • Postmortem: Cybervisualization

    [04.27.21]
    - Justin Neft

  • What Went Wrong

    Development:

    • No clear direction or defined vision for the project.

    • Very little project management.

    • It was a big task for three people, especially with one of us working part time.

    • We didn't get real data until the day before the competition; we had to make many assumptions until then.

    • GitHub caused some problems due to our lack of experience with the tool.

    • Shifting competitions caused some development time to be lost.

    • We had very few coding standards.

    • We didn't do nearly as much research as we should have.

    Product Launch/Production:

    • There was no full broadcast dry run.

    • We had to cut a lot of features (injects and inject videos, attack visualization).

    • We had to change how data was read in mere hours before the competition went live.

    • Our main role was software development, which left us little time and experience to prepare as a broadcasting team.

    • We tried to schedule the project to avoid crunch time, but it still ended up occurring.

    This was one of the first major products that our team members had shipped. As such, we encountered many issues that we did not initially anticipate.

    One of our most prominent issues was finding a clear direction and vision for the project. This is inherent in working in an emerging field: when a topic is so new, it's hard to find inspiration or reference material. That sometimes made it very difficult to decide what our visualizer needed to be and what features it needed to have. Properly researching cybersecurity concepts and the competitions in question would also have helped greatly here.

    Having a hard-to-define direction made it difficult for our small team to maintain clear project management. Although we were agile and able to make quick decisions, we found a bit too much safety in that flexibility. There were often moments where we would ask, "What do we do next?" Having a designated project manager would have helped a lot, but with a team of two full-time developers and one part-time developer, we didn't initially see a need for one. Once we had a deadline, we were able to plan out what the visualizer needed.

    Part of our development and planning was stunted by a lack of real data. We weren't able to get real, properly formatted data from the scoring system until the day before the competition, which meant a lot of development time was spent building systems based on assumptions. For example, we assumed each node in the visualizer corresponded to an IP address; in reality, the scoring system passed us service check names rather than IP addresses. Having to quickly rework the code for the real data contributed to the crunch time as well as to a loss of visualized data.
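    In hindsight, a thin translation layer between the scoring feed and the visualizer would have contained the damage from that kind of last-minute format change. The sketch below is hypothetical and written in Python for brevity rather than Unity's C#; the field names ("service_check", "ip", "status") are assumptions for illustration, not the scoring system's actual schema. The point is simply to key nodes on an opaque identifier and confine knowledge of the feed format to a single function.

        from dataclasses import dataclass

        # Hypothetical adapter; field names are illustrative assumptions.
        @dataclass
        class NodeUpdate:
            node_id: str  # opaque key the visualizer uses for a node
            is_up: bool   # whether the scored service is currently up

        def parse_feed_record(record: dict) -> NodeUpdate:
            # Accept either identifier the feed might send, so a late format
            # change only touches this function rather than every system
            # built on top of it.
            raw_id = record.get("service_check") or record.get("ip")
            if raw_id is None:
                raise ValueError(f"record has no usable identifier: {record}")
            return NodeUpdate(node_id=str(raw_id),
                              is_up=record.get("status") == "up")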

    Early in the project, we decided to use tools that we knew rather than learn new ones. This allowed us to hit the ground running and produce results as early as week two. One of the tools we used was GitHub. Unfortunately, we did not know Git as well as we thought. Git's .gitignore file is an extremely useful way to keep files from being tracked, but we learned far too late that it is a preventative measure, not an active protection: it stops untracked files from being added, but does nothing about files the repository is already tracking. After only a week of the .gitignore file sitting in the wrong folder, Git had begun tracking thousands of junk Unity files, which in turn created hundreds of merge conflicts. Eventually we realized the issue and had to spend an entire day learning Git Bash to solve it.
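    For anyone who hits the same wall: adding a path to .gitignore does not untrack files Git already knows about. The usual remedy is to remove the offending files from the index (not from disk) and recommit. The folder names below are Unity's typical generated directories, assumed for illustration rather than taken from our repository.

        # Remove already-tracked files from the index only; copies on disk are kept.
        git rm -r --cached Library/ Temp/
        git commit -m "Stop tracking Unity-generated files"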

    Some development time was lost early on due to a change in what competition we were building for. Originally the tool was being designed for CPTC. This competition mainly focuses on student teams attempting to attack a network and report their findings. A few weeks in, the project pivoted towards building for NECCDC. Although both competitions visualize similar information, NECCDC is focused on student teams defending a network. The way information is gathered between each competition is different, as well as the way we needed to visualize it. This caused multiple already developed features to be shelved since they were no longer relevant for NECCDC. Fortunately the plan was to return to developing for CPTC after NECCDC had completed, so shelved features may be used in the future. In terms of building for NECCDC however, those features were lost time.

    One major issue that became apparent late in the development cycle was the lack of a consistent coding standard across the team. Although we each had decent habits for commenting and writing clear code, those habits were distinctly different, which made it difficult to quickly understand a teammate's code when they weren't around. This caused a few issues in the final days before the competition. Agreeing on a shared standard, or regularly reviewing code with each other, could have alleviated these problems.

    We didn't spend nearly as much time as we should have researching cybersecurity topics, the competitions we were building for, or even how to commentate a live stream. A better understanding of cybersecurity would have helped us work more closely with the Cyber Security Capstone team we partnered with. Knowing the competition we were building for would have helped us determine what we needed, and how to get it, early on. In its early stages, our visualizer could only show whether a service was up or down; we didn't learn until during the competition that teams were actually being locked out of their systems, not having their systems turned off. As for the live stream, we learned a lot about what a commentator needs from our visualizer by being the commentators ourselves. It was less than ideal to discover what we needed from the tool at the moment we needed it rather than beforehand.
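    As a concrete example of that gap: our node state was effectively a boolean, when even a little research would have revealed a richer state space. The sketch below (again in Python, for brevity) shows what the model might have looked like; the extra states are our guesses at distinctions worth visualizing, not NECCDC's actual scoring categories.

        from enum import Enum, auto

        class ServiceState(Enum):
            UP = auto()          # service responding normally
            DOWN = auto()        # service not responding at all
            LOCKED_OUT = auto()  # host reachable, but the team has lost access
            UNKNOWN = auto()     # no scoring data received yet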
