I Have A Few Posts Waiting For Some Final Touches

published 28 Dec 2013

I have a few posts waiting for some final touches, but as I am likely to be busy going into the new year with osu! updates, osu!tablet shipping and a bit of travel, I hope you can enjoy this in the mean time!

comments

Stopping a DDoS

published 19 Sep 2013

Anyone who is a regular osu! player will be well aware of the troubles I have been experiencing over the last few months keeping the servers online. Daily DDoS attacks have meant constant interruptions across the board, with the majority focused on Bancho – the server-side component of osu! responsible for multiplayer, chat, user presence and providing osu! with up-to-date player stats and ranking details. I thought it would be interesting (and hopefully beneficial to someone in the future) to write up my experience of combating such an attack to the best of my ability.

First, let’s go into the knowns of the attack.

If you are not familiar with what a DDoS attack is, I highly suggest reading up on them before continuing to read this post. I am expecting a lot of questions asking why I can’t just “block the IPs” of the attacker or similar, and while I will try to answer that, you will probably get better answers reading Wikipedia due to the sheer scope of what can be involved in such an attack.

The attacker

DDoS attacks are regularly used as a way of bringing attention to a specific issue, group, or single user’s demands. They force a service to become aware of whatever the attackers want, holding it to ransom to an extent. For this reason, it is often the case that the user or group committing the attack will publicly take responsibility for it, and provide proof that it is indeed them performing it, in order to gain recognition.

In this case, even while going out of my way to find out who was responsible for the attacks, it remained very unclear until quite late in the piece. Via relayed IM logs, I was eventually able to get an idea of who was responsible, and what they wanted from us. As I expected, a user had been banned for cheating – in this case over a period of many years – and wanted all of their accounts unbanned. This is obviously a demand which I would never agree to.

The power source

A single user is usually not capable of launching an attack that would take down a service like osu! without another party managing the botnet and/or servers responsible for providing the bandwidth to launch the attack. In this case, the attacker was making use of multiple publicly available “stresser” or “booter” services, which provide a web interface in front of the infrastructure required to launch attacks. This allows an attack to be launched by simply entering a target IP address, port, attack type, length and hitting the “GO” button.

a stresser

These services usually charge between $3-20 an hour depending on their reliability and strength. They sit under the legal veil of being “stress testers” which are made to be used on servers you own to test how they will stand against an attack. They usually contain no contact information and are very clearly geared towards users with different intentions.

pricing

It is safe to say from periods of analysis (where a small subset of the data is logged and parsed during an attack) that there were both large, sprawling botnets and a few high-powered servers involved. Whether these were compromised servers, or servers rented by the “stresser” services themselves, they were capable of reaching attack velocities up to – and in a few cases exceeding – 10Gbit/s. This is a sizeable force to deal with.

polish

The target (osu!)

osu! is run from a number of diverse locations around the world, with database slaves and download mirrors distributed for performance and redundancy. The core servers are all rented at Softlayer’s SJC datacentre. After years of searching for a datacentre which manages to just do it right, I ended up with Softlayer, and I have been impressed with their reliability and support 95% of the time. The pricing is above what you would pay elsewhere, but they offer benefits such as private networking, portable IP addresses and free PPTP VPN access which others do not provide.

The osu! website has been sitting behind CloudFlare for over six months now. I was initially skeptical about using a service like CloudFlare, as it adds an extra unknown between your service and the internet over which you have very little control – if something were to go wrong at their end, I would have no power to fix it. While in the last six months this has happened occasionally, the overall result of switching CloudFlare on has been very, VERY positive. I would love to go into the specifics of this in another article.

CloudFlare can handle DDoS attacks. They can handle, mitigate and cut off the source at a level datacentres may not be able to match. They have a knowledge of how attacks happen and how they can be stopped with minimal consequence and downtime. During the period of attacks on osu!, the website did not flinch once. The attackers either knew they had no chance of messing with CloudFlare, or tried and failed to cause any harm. Unfortunately for us, Bancho is a completely TCP-based service, running over port 13381 with a custom protocol I engineered specifically for osu!. As CloudFlare only handle HTTP traffic, putting Bancho behind CloudFlare was simply not an option.

Bancho, sitting in Softlayer’s datacentre, is guaranteed a certain level of protection that is offered with all servers, in the form of a Cisco Guard firewall. While renting these devices permanently is outside of my limited budget, Softlayer are kind enough to dynamically reroute all traffic through one should they detect an incoming attack. Once this occurs, the firewall will intercept and filter traffic, delivering a clean stream of data to the end server for 24 hours, after which the conditions are re-assessed and the device is usually removed. Should attacks keep up, Softlayer also reserve the right to null route your server’s IP, rendering it unreachable for 24 hours (with no traffic ever reaching it). Cisco Guard and null routing are done on a per-IP basis, which allows a bit of flexibility should multiple IPs be assigned to a single server. This turned out to be very useful during the initial stages of the attack.

Other osu! services – such as download mirrors – were not heavily targeted by these attacks. Even if they were, the impact would be minimal and it is easy to re-route to another location. There are also several mirrors run by other kind people which provide downloads should the official mirror go down.

Testing the waters

Long before the recent wave of attacks even started, there were occasional DDoS attacks detected against osu!. At the time, I assumed these to be random – sometimes people can be looking to test their botnet out, or hitting an IP which used to belong to another service. It is now easy to see that these were no mistake, and in hindsight were the prologue to the main period of attack. The first such attack was on May 19th.

The force to deal with

Starting around July 5th, I began noticing an increase in the number of incoming attacks. This is most easily seen on a graph of incoming traffic to the server running bancho:

incoming traffic to bancho

Note that each of these spikes was usually a series of independent attacks on that day. While the maximum traffic shown on this graph is 1Gbit, attacks regularly exceeded this; Cisco Guard kicks in at that point, so the excess is not visible here.

Course of action

Most datacentres are not equipped to deal with DDoS attacks. 99% of them will resort to null routes as a solution for clients under attack. Softlayer offers firewalls which have DDoS protection, but due to the size of the attacks, even with such protection added the IP endpoints would likely be null-routed to protect the larger network, and reduce the effect on other clients sharing the same routing infrastructure.

There are services which offer DDoS mitigation, by placing a “proxy” between your server and the internet and eating the DDoS traffic. Prices range from $50 to upwards of $10,000 a month. For the level of cover required by osu!, we are looking at the expensive end of the scale. For what it’s worth, I did try – and am still using for bancho’s IRC gateway – Staminus, which offers cheaper options that null-route on a very fine scale, making recovery fast after the attacks stop, for a relatively affordable price.

It was time to think outside the box. We needed a solution that would not only stop these attacks, but prevent them from happening again in the future.

I have long wanted to add UDP support to bancho, allowing for faster round-trip times and lower overheads when establishing connections, but in this case UDP would not help. Instead, let’s consider adding HTTP support. Why HTTP? Because CloudFlare!

If you are following closely, you are probably thinking one of two things right now:

  1. So you’re completely re-hauling osu! to use a REST approach?
  2. What? But bancho is a streaming protocol! You’re crazy! You’re doing it wrong!

While I would love to try the first option – and who knows, maybe eventually this will happen – I was looking for a quick solution, which could be implemented in a few days maximum. Rewriting from scratch with a fresh protocol and architecture in this amount of time is just not feasible. So let’s move to the crazy option. Piping a streaming protocol over HTTP.

It may not be as crazy as it sounds. These days HTTP widely supports keep-alive, which means a single TCP connection can be used to transport multiple requests. This reduces the connection establishment time drastically. All that remains is the overhead that comes with HTTP headers, which can be reduced by not including any headers which would not be used by bancho. Including the bare minimum headers is still necessary, such as the HTTP protocol version and transfer type.
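
As a rough illustration of how small the per-request overhead can get, here is a sketch of a bare-minimum keep-alive request. The header set, endpoint and payload here are my own illustration, not the actual bancho wire format:

```python
# Sketch of a minimal HTTP/1.1 request carrying a binary protocol frame.
# Only the essentials are included: request line, Host, Content-Length and
# keep-alive. Everything else (cookies, user-agent, accept headers, ...) is
# dead weight for a use case like this.

def build_minimal_request(host: str, payload: bytes) -> bytes:
    """Build a bare-bones keep-alive POST wrapping a binary frame."""
    headers = (
        f"POST / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(payload)}\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"
    )
    return headers.encode("ascii") + payload

req = build_minimal_request("example.com", b"\x01\x02\x03")
```

With a short hostname, the fixed overhead per request is well under 100 bytes, which is trivial next to connection establishment costs.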

Even so, maintaining the responsiveness of a streaming connection over a non-streaming protocol requires quite frequent sends – hopefully at least once a second. We can consider two cases here: one where the client requests something of the server and expects a response, and a second where the server has a waiting command/request for the client. The majority of osu! requests are initiated from the client-side, so we can optimise with the first case in mind.

Case 1: client has a request of the server

In this case, we can treat the request basically as a REST request. Assuming there is no existing request to the server, we can instantly send a new HTTP request and wait on the response. As we are encapsulating a stream here, we don’t want to send a request while another is outstanding, as this could cause weirdness at both ends.

Case 2: server has a response waiting for the client

If the response is already being waited on by the client, there is likely already an open HTTP connection. If not, we resort to polling from the client. Depending on the current state of the client, polling will occur every 1-20 seconds. If the user hasn’t moved their mouse in a while, or the osu! window is inactive, the polling interval will scale back over time, resulting in less unnecessary traffic. When active, the perceivable latency added by polling is next to zero, as the previous poll is kept waiting on a response to the extent of the polling interval (within reasonable limitations). This means we always have one HTTP request open waiting on a response.
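
The adaptive polling described above could be sketched as a simple back-off function. The post only states the 1–20 second range; the curve and the 30-second doubling constant below are my own illustration:

```python
# Hypothetical back-off curve for the client's polling interval: 1 second
# while the user is active, scaling toward 20 seconds with inactivity.

MIN_INTERVAL = 1.0   # seconds, user actively interacting
MAX_INTERVAL = 20.0  # seconds, window inactive / user idle

def next_poll_interval(idle_seconds: float) -> float:
    """Double the interval for every 30s of inactivity, clamped to bounds."""
    interval = MIN_INTERVAL * (2 ** (idle_seconds / 30.0))
    return min(max(interval, MIN_INTERVAL), MAX_INTERVAL)
```

Any mouse or keyboard activity would reset `idle_seconds` to zero, snapping the client back to the responsive 1-second cadence.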

Before starting on this approach, I contacted CloudFlare, outlining what I wanted to do and asking for their thoughts on the matter. I was particularly concerned about the number of requests this would cause, and also whether they permitted this kind of usage of their service. Their response was to make sure that connections were not kept open for long periods, and to upgrade my plan to a higher tier (Business / Enterprise) to account for the load. I was already aware of their policies on long-standing connections, so planned for this from the start. Upgrading my account was the least of my concerns, and still cheaper than any other DDoS mitigation option of this scale. We are good to go!

Path to recovery

Launching was fairly painless and went without any hitches. I was thoroughly impressed with the deployment, and how smoothly everything went.

a lot of requests

CloudFlare handled the new load like a boss, and the attacks stopped. After some quick iterations optimising the poll timings, bandwidth usage was lower than TCP connections used to be, which surprised me, but was strangely comforting at the same time.

The resolution of this issue was so sudden that there really isn’t much more to say. CloudFlare are a force to be reckoned with, and are doing a great part in protecting the internet1.

Final thoughts

It is now a month after deploying this solution. Things are still running smoothly, and we haven’t “seen” an attack since. Note that this doesn’t mean there haven’t been attacks. One downside of being behind CloudFlare is that unless they report a fault on their twitter/status page, you are totally unaware of what is going on on the other side. There have been very short periods of reduced traffic, and debugging these cases is quite frustrating when you are unable to see exactly what is or is not being blocked from hitting your servers. Based on the infrequency of this happening, I choose to have some faith for now.

As I said previously, adding a service like CloudFlare in front of web servers is adding another unknown. It is therefore important to know that routing is reliable and support is there. CloudFlare have not disappointed on either of these, offering support turn-around times of <30 minutes in almost every case, with a knowledgeable engineer rather than some outsourced level 1 tech guy (more than I can say for most datacentres out there). Their routing is amazing, nothing more to say there.

While I’ve tried to go through everything in great detail, there is a whole lot more to this story, my implementation of the new bancho protocol, and what I have learnt over the last couple of months. If you want to know more about any specific facet, please leave a comment!

  1. Some of the services used to launch these DDoS attacks from are sitting behind CloudFlare, protecting themselves. I reported these to CloudFlare but it seems as though they will not act against a service unless they match very specific criteria. You’d think they would be against sites used to attack CloudFlare itself, but hey, who am I to decide that ^^; 


Frictionless Updates

published 27 Aug 2013

One area of development that has both interested me and consumed a lot of my thought time over the years has been the deployment process for osu! (and games in general). As far as deployment is concerned, I have got things down to a very concise process at my end, allowing me to push a variety of new builds/updates for osu! out to you guys with a few key-presses. In this post, I’d like to focus on the other side of the picture – what you see when an update is available, and how that update is applied to your game.

Let’s begin by understanding how updates are seen by end users, and the various methods of deployment that can change this perception.

When a user clicks the icon of a game, they want to play the game. Anything that gets in the way of this should be avoided from a developer’s perspective. Most games force users to update before the game starts, via an enforced patcher – sometimes referred to as a launcher – that serves as a gateway to starting the game (as seen in every MMO, League of Legends, Starcraft 2 etc.). When updates are only being released once a week – or much less in many cases – I can only see this as a horribly inefficient approach which inconveniences users to no end. I’ll go out on a limb here and say that most gamers’ minds have been trained over the years to accept the added 5-20 seconds spent at the patcher (only to be told their game is up-to-date) as something which must exist. Add up this wasted time and we’re probably looking at some big numbers.

launcher patching is zzz

(I have to make a call out to Blizzard who have recently changed their game patchers to have multiple update “levels”, allowing the game to start without applying non-critical updates. This is an improvement, but the launcher is very hard to understand as a result, with stages such as “optimising”, which should IMHO not be the concern of the end-user.)

Ever since the beginning of osu!, I have avoided this method of updating. Until recently, osu! would only launch the patcher if an update was found; otherwise the user would go straight into the game, ready to play. In this scenario, unless a critical update is released, the user is able to get into their game instantly without interruption, and start playing.

This may seem like a huge improvement, but I have learnt over the years that this backfires to a certain extent. Because osu! users are generally used to the gratification of being able to interact with the game so quickly, when they are interrupted by the game closing to run the patcher, they are more aggravated than they may have been with a “launcher” style patch process. Even though the update-as-an-afterthought I had been using is more efficient overall, due to the way users perceive interruptions, it seemed to be shedding more of a negative light on updates than I expected, with regular storms of complaints in in-game chat following the release of patches which bring new features and bug fixes players should be pumped to receive.

So it was time to re-think the update process. I am a fond user of Chrome as a browser and have to applaud them for their awesome update process, which goes something like this:

  • The browser runs a daemon in the background which checks for updates on what we will assume is an hourly basis.
  • When an update is found, files are patched in the background.
  • If the browser isn’t running, the patch can be fully applied, and next time the user opens the browser they are magically (and usually without being aware) on the latest version!
  • If the browser is running, Chrome’s menu button changes from the usual black icon to a glowing green, letting the user know that an update is present.
  • Should the user choose to do so, they can complete the update process from within that menu (triggering a browser restart), else the browser will update itself next time it is closed.

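The flow above can be sketched as a small state machine. All names and states here are my own, purely to illustrate the control flow – this is neither Chrome’s nor osu!’s actual code:

```python
# Toy model of a Chrome-style background updater: check, stage in the
# background, then swap in the new version on the next restart.

from enum import Enum, auto

class UpdateState(Enum):
    UP_TO_DATE = auto()
    DOWNLOADING = auto()   # patching files in the background
    READY = auto()         # staged; restart pending (the "glowing icon")

class BackgroundUpdater:
    def __init__(self, installed: str):
        self.installed = installed
        self.pending = None
        self.state = UpdateState.UP_TO_DATE

    def check(self, latest: str):
        """Periodic check: stage a background download if a newer build exists."""
        if latest != self.installed and latest != self.pending:
            self.pending = latest
            self.state = UpdateState.DOWNLOADING

    def download_complete(self):
        self.state = UpdateState.READY  # UI may now hint that a restart helps

    def restart(self):
        """On the next (re)start, the staged version silently becomes live."""
        if self.state is UpdateState.READY:
            self.installed, self.pending = self.pending, None
            self.state = UpdateState.UP_TO_DATE
```

The key property is that the user never waits on a download: the only user-visible step is an optional restart once everything is already staged.
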
Due to the slickness of this process, I decided to switch osu! across to something similar. I’m not too fond of running a daemon in the background, so had to skip out on this part (which does make the final solution a touch less slick), but moving forwards this can be implemented if osu! gets to the point of running in the background for other (cool) reasons :).

The updater now lives in the main menu, in the form of a subtle message in the bottom-left corner while downloading/applying updates, and a spinning arrow to notify when osu! can be restarted to finish the process. There are no progress bars or external applications involved – it all lives inside osu! itself.

spinny things

If the user wants to play on their older version, they can continue to do so. There is nothing forcing them to update. On the other hand, if they want the latest and greatest fixes/feature additions, they have the one-click option to update their game. The process is very quick (the patch has already been applied in the background, and is differential, using binary diff patching), requiring just a restart of the client.
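
To illustrate the idea behind differential (binary diff) patching – shipping only the bytes that changed rather than whole files – here is a toy sketch for equal-length files. Real tools such as bsdiff also handle insertions, deletions and compression; this is purely illustrative:

```python
# Minimal differential patching: record only the runs of bytes that differ,
# then splice them into the old file to reconstruct the new one.

def make_patch(old: bytes, new: bytes) -> list[tuple[int, bytes]]:
    """Return (offset, replacement) runs where new differs from old."""
    assert len(old) == len(new), "sketch only handles equal-length files"
    patch, run_start = [], None
    for i in range(len(old) + 1):
        differ = i < len(old) and old[i] != new[i]
        if differ and run_start is None:
            run_start = i                      # a differing run begins
        elif not differ and run_start is not None:
            patch.append((run_start, new[run_start:i]))
            run_start = None                   # the run ends; record it
    return patch

def apply_patch(old: bytes, patch: list[tuple[int, bytes]]) -> bytes:
    data = bytearray(old)
    for offset, replacement in patch:
        data[offset:offset + len(replacement)] = replacement
    return bytes(data)
```

For a typical incremental build where only a few code regions change, the patch is a tiny fraction of the full file size, which is what makes background patching cheap.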

From what I can tell, this has seen an improvement in how users perceive updates, and there is no longer an outbreak of complaints in chat whenever an update is made available. There are still some areas I would like to improve in the future, though.

  • At the moment, if osu! has an update ready to apply on next startup, but another update is released in the meantime, it isn’t smart enough to check for this before running. In this case, it takes another restart of osu! if the user wishes to apply the next update.
  • If an update is found before osu! finishes launching to the menu (for people that have large beatmap collections, for instance), it would be best to apply it at that point, since it would add only nominal time to the startup process.
  • There are rare cases in which the external updater is still launched as a fail-safe. I think I have ironed all these bugs out, but I still see a small number of users having this happen to them. Eventually I want to deprecate the external updater (osume.exe) completely, so this will need to be resolved.
  • Consider the daemon option. Updating in the background would be an amazing improvement from a user experience point of view. If I was to take this direction, rather than running a daemon on startup (which is a bit obnoxious), osu! would just remain in memory after “exited” by the user (until system reboot) and keep things warm. Memory footprint would be very small in this state, of course.

Any thoughts on the update process? Any issues or suggestions? Let me know, since I’m always looking to improve!


The Last Two Months in osu!

published 03 Jul 2013

I’ve held off writing these (originally) weekly posts because gathering the required content to make them interesting – screenshots, links and such – is quite a large time sink, which I would rather spend on making things happen. I also feel that you guys deserve more updates. So here’s a brief summary of all the awesomeness I (and others) have been doing, mostly behind the scenes:

  • The update system was rewritten. osu! should now be able to update in the background without running the external updater app (osume). I have a blog post specifically on the reasoning behind this change and a lot more detail written and almost ready to post, so check that out when it appears.

    update now!

  • The whole osu! infrastructure has needed to scale with the increasing user base. Database load was edging closer towards saturation, so I went through a bunch of software and hardware optimisations with a very fine comb. This involved deploying more read-only slaves, tweaking indices for better write performance, switching table engines of some high-write tables, altering MySQL configuration on the master, re-partitioning some tables, and making more room on SSDs for hot IO paths. This really needs a full post to understand the scale and effort which goes into keeping a system like osu! running (on my own).

    database load

  • To make space on SSDs, I finally took steps to move replay data out of the database. Many of my database servers operate with no HDD storage, which means irregularly accessed replay data was taking up valuable storage space (those replays ain’t small by any means). I tested riak and mongodb, but finally settled with a hosted solution: Amazon S3. This relieves me of storage maintenance, and adds redundancy which hasn’t existed at the level it should have until now. As a result, I have also started storing the top 50 replays for every beatmap, rather than top 40.

  • The osu! main menu now has a shiny visualisation! The osu! cookie now glows as your music gets more intense. It also pulses more as the song gets louder.

    shiny!

  • In order to make audio transitions and loading more performant and flexible, I rewrote a large portion of the osu! audio framework. This isn’t yet in a public release, but should be coming soon. It will help decrease load times on song select previews and make the transition into gameplay a lot smoother. It also allows for multiple audio tracks playing at once, which may be useful for storyboarding.

  • Resolution switching has also received some attention, making switches from fullscreen to windowed quicker and smarter. Borderless window is now handled as its own mode, which won’t overwrite your window resolution settings.

  • The cause of some replays breaking (misses where the player didn’t actually miss) has been found and patched. Unfortunately it was an error with the replay data itself, so some existing replays are going to be broken for eternity. I tried to hack in a fix for them, but it’s just not worth it due to the potential of causing weirdness on other replays. Might be best to just get a list of the remaining broken replays and edit the replay data.

  • I rewrote my localisation toolkit to allow almost automatic string extraction from osu!. An initial test run saw the number of localisable strings double. Expect this to keep increasing as I get the impulse to make more available. Note that these updated localisations are not currently in the public release due to some show-stopper bugs in other places.

  • I’ve been working quietly on a new version of pp (ppv2) which will make the ranking system a lot more understandable, real-time, and applicable to lower level players. It also shouldn’t jump around as much. I hope to make it completely open so critics can suggest improvements, and they will be applied as necessary. ppv2 has its own processor which is quite a beast, and can handle recalculations of users in real-time with almost no overhead. Getting this live will mark the end of daily pp processing which currently eats a hell of a lot of processing power.

  • I’ve been working on a new beatmap modding infrastructure and generally rethinking the whole process. You can read more here and see a portion of the system in action here. I haven’t had the time to push this out for real-world use yet, but I really want to this month.

  • I have a branch that finally compiles without XNA, and runs under .NET4.5 (or anything in between). The eventual plan is to move forward from .NET2.0, as there are huge performance improvements with the newer releases – many of which will reduce the “lag spikes” that some people experience. This will mark the death of DirectX support, but don’t worry; OpenGL support will be vastly improved and tested before this happens.

  • Experimental replay scrubbing is available on the test build, but was broken recently with the above audio framework changes. I’ll fix that soon, so jump on test build and give it a whirl if you are curious. This won’t be available on public in the next release, but maybe sometime soon after. When it works ;).

  • The osu!api has gone live, giving developers access to some of the data osu! has built up over the years. It is currently quite minimal, but this is intended as I plan on only adding new API calls which people will find useful to create interesting services and apps. If you want to request any additions to the API, file an issue on GitHub.

  • Disqus is now available on all beatmap pages, allowing for discussion outside the forum (which in the case of ranked maps is not so active). It makes use of SSO (Single Sign-On), meaning you can post using your existing osu! account. Feels very integrated and nice to use. And it’s threaded!

  • A mapping contest has run and ended, and is currently in the judging stage. While it didn’t go as smoothly as hoped, we plan to have more in the near future to make up for it.

A few milestones:

  • 10,000 concurrent users peak (5th May).
  • 500,000 active users (10th June).
  • 100,000 likes on facebook (1st July).
  • 1,500,000,000 plays (3rd July).
  • Approaching 3 million registered accounts!

I know I’ve still missed many things from the above lists. Apologies for that; I will strive to get more regular updates up so you get a more granular look at what is going on. osu! is growing at a pretty crazy speed and I’m doing my best to keep up. Hopefully you guys can agree :).

If you’d like me to write about any of the above dot points in more detail, leave a comment! I have a lot of detail I can add, but don’t want to bore you all with technical blabber that no one cares about.


Optimal Database Backups

published 31 May 2013

No one enjoys database backups. They usually involve a load spike and a lot of table locking (even in best-case scenarios) which can be felt on live servers. Some sites bring their services down to perform backups, others slow to a halt. It is a very important aspect of running an online service, and finding an optimal and elegant solution is usually very specific to the infrastructure and nature of services being offered.

I am very serious about keeping live backups. The osu! database is replicated to a slave server, providing a real-time fallback should the main server fail. This is already ample to handle any server software/hardware issues – for instance, a drive failure. It does unfortunately leave open the possibility of human mistake – where the database is damaged internally – which, while I’d love to say doesn’t happen, is generally unavoidable (especially when expanding the size of the team working with database access). In the case of a human mistake, both the master’s and slave’s data are in a bad state, making slaves of this nature useless.

To guard against human error, my backup solution until now was to take database snapshots from the slave server. This has very minimal effect on the front-facing service, as user-based actions rarely require a query to the slave database, but the storage requirements, the IO requirements and the general clunkiness of snapshots have always bugged me. It also means that, as backups were only made once a week, recovered data could be up to seven days old, which is not acceptable.

Incremental snapshots are one way to avoid this pitfall, but they require all database tables to use the InnoDB engine. I regularly test InnoDB (or in this day and age, XtraDB) but am still getting better overall performance with the arguably less reliable MyISAM, so this is not an option.

Introduce a delayed slave to the equation. This is a separate server which is initialised as a slave to the master database, but maintains a time distance from the live data. This is easily done using the Percona Toolkit’s pt-slave-delay, which runs as a daemon and allows you to specify a period by which SQL operations should be delayed.

There are a few amazing advantages here:

  • There is no added load to any of the live servers, apart from the network overhead of streaming binlogs.
  • It is a continuous backup. You can’t get better than this. No snapshots to worry about; only the assurance that you can always recover.
  • Because binlogs are always sent instantly, this slave instance can also be replayed to any particular point in history within the delayed duration. So if it is running 24 hours behind by default, it could be asked to catch up to 12 hours behind, or even have the delay removed entirely – making it a potential real-time backup slave in case of failures.
  • If you already have replication set up, initialising the new slave can be done with zero front-facing impact by using an existing slave as the point of initialisation.

To initialise the pt-slave-delay command, it’s as simple as ensuring replication is started, then specifying the delay and check interval. I am currently using the following, which should be run at system startup if you want it to persist:

#!/bin/sh
mysql -e 'start slave;'
pt-slave-delay --delay 24h --interval 5s --no-continue localhost

Take note that while replication is being held back, you will no longer be able to see how many seconds behind the server is using the SHOW SLAVE STATUS command (Seconds_Behind_Master reads NULL while the SQL thread is stopped). As I regularly use this for monitoring the slave delay, I had to use an alternative method to find the delay. For me, the easiest way was to select the MAX(timestamp) from a table with high activity and compare this to CURRENT_TIMESTAMP as follows:

SELECT UNIX_TIMESTAMP(CURRENT_TIMESTAMP) - UNIX_TIMESTAMP(max(`date`)) AS seconds FROM `osu`.`osu_scores`;
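
The same check can be expressed outside the database, comparing the newest row’s timestamp against the wall clock. The 24-hour target and one-hour tolerance below are my own illustration of how a monitoring script might flag drift:

```python
# Freshness check for a delayed slave: flag it if the lag drifts outside
# the expected delay window in either direction (too fresh means the delay
# daemon isn't working; too stale means replication has stalled).

from datetime import datetime, timedelta

TARGET_DELAY = timedelta(hours=24)  # illustrative: matches a --delay 24h setup
TOLERANCE = timedelta(hours=1)

def slave_delay_ok(latest_row_time: datetime, now: datetime) -> bool:
    """True if the delayed slave is roughly TARGET_DELAY behind the master."""
    lag = now - latest_row_time
    return TARGET_DELAY - TOLERANCE <= lag <= TARGET_DELAY + TOLERANCE
```

A cron job running this against the MAX(timestamp) result would catch both a stalled slave and a misbehaving delay daemon.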

I really enjoy databases and the optimisation of them at low and high levels. osu! is still relatively simple when it comes to database infrastructure but it is rapidly expanding. Keeping up with the increasing load is an interesting and very fun process. I hope to post more articles like this delving into the slightly more technical side of things going forward.

Update:

I just found out that as of MySQL5.6 (which I am actually running, so have switched to this method) you no longer need the pt-slave-delay script as this is built-in functionality. You can add a delay with one simple command (make sure to STOP SLAVE; first):

CHANGE MASTER TO MASTER_DELAY = 14400; -- delay of 4 hours

SUPER IMPORTANT NOTE: If you are delaying further back than the master has stored in binary logs, running a CHANGE MASTER TO like this will cause the world to fall apart, as it resets all slave relay logs. Make sure to carefully read the documentation – specifically:

CHANGE MASTER TO deletes all relay log files and starts a new one, unless you specify RELAY_LOG_FILE or RELAY_LOG_POS. In that case, relay log files are kept; the relay_log_purge global variable is set silently to 0.

p.s. I haven’t forgotten about the “this week in osu!” series, but some of the things I planned on writing about have been lost in my forgetful mind. I’ll try and knock one out along with the next public release, which I am hard at work on getting finalised. I am trying to livestream as much as I can, so if you are interested in the development of osu!, make sure to tag along and say hi in chat :).
