Thursday, April 7, 2016

Rant: Network Encryption

Network encryption is a BITCH of a headache.  Transmitting a key over the network in a secure encrypted way and making sure it matches on both the Client and Server side was the easiest part.

It goes like this..
  1. Client connects to Server with public key attached as a nice way of saying "HI!"
  2. Server sends handshake request message with encrypted shared key.
  3. Client sends handshake response which means "Thanks..."
  4. Client now has shared key for encryption of packets
  5. Server sends a chat message with random gibberish that only a baby would understand
  6. Client NEVER receives it and no error is thrown..
  7. Frustration begins..
Obviously the issue should be the Client not recognizing that message because it isn't properly decrypted, right?  Nope, both Client and Server are using the shared key, and encryption/decryption has been checked and works.  So, where to go from there..  TBD.
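The handshake steps above can be sketched in Java. This is a minimal sketch, not the engine's actual code: the class name `HandshakeSketch` and the RSA-wrapped AES key approach are my assumptions about how steps 1 through 4 would look.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HandshakeSketch {
    public static boolean demo() {
        try {
            // Step 1: Client generates a key pair and attaches the
            // public key to its "HI!" connect message.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair clientKeys = kpg.generateKeyPair();

            // Step 2: Server generates the shared AES key and encrypts
            // it with the Client's public key for the handshake request.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey sharedKey = kg.generateKey();
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.ENCRYPT_MODE, clientKeys.getPublic());
            byte[] wrapped = rsa.doFinal(sharedKey.getEncoded());

            // Steps 3-4: Client decrypts with its private key and now
            // holds the shared key for packet encryption.
            rsa.init(Cipher.DECRYPT_MODE, clientKeys.getPrivate());
            SecretKey clientCopy =
                new SecretKeySpec(rsa.doFinal(wrapped), "AES");

            // Both sides should now hold the exact same key bytes.
            return Arrays.equals(sharedKey.getEncoded(),
                                 clientCopy.getEncoded());
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true" when both sides match
    }
}
```

If both sides really do hold the same key, as in this sketch, then the vanishing chat message would indeed have to be a transport problem rather than a crypto one.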

Thursday, January 28, 2016

Transferring Files over Network? Sure, why not...

RECAP: In the last post I went over the Network portion and how it had reached Beta.  The upgrade in status was the result of hitting a goal of stability and bandwidth consumption protection that I felt was an extreme priority.  There is enough customization in the way network messages are handled that getting the most out of bandwidth won't ever be an issue.  So, onto the next issue I needed to deal with...

Turns out that transferring files over a network wasn't as hard as I thought it would be.  If it weren't for RandomAccessFile in Java, the performance would've been terrible and there would have been a need for a lot more bubble wrap code (bubble wrap = exception handling).  The key to bandwidth-friendly file sharing is to not send the ENTIRE file over the network all at once.  Sure, you might have a great internet connection, but what if you are streaming your favorite movie off your legal streaming service and all of a sudden that stream comes to a halt because my bandwidth-hungry program is taking up all your streaming power?  The answer is simple really: send the file as separate data chunks that are reliable, bandwidth friendly, and don't overwrite data from other chunks.  Happy to say I did just that.  Not only can it send the file chunks with a hard kilobyte limit, but it can do so without caring about order.

Normally, a file is read the same way you would read a book: start to finish.  You can create the file chunks to send over the network that way if you want to load all those chunks into memory and immediately send them in one go.  Can most computers support that?  Sure.  Is it good practice to consume all of your user's RAM just to make your life easier?  No.

In short (because this is already too long): the Server receives the File Request.  The Server sends an ACK for that message, pulls a chunk off the File starting at the first position in the file, and sends it to the Client.  The Client gets the chunk and sends another File Request back to the Server, but this time the File Request carries extra information telling the Server where the next chunk begins.  If those messages go out of order, the integrity of the file is still maintained because the File Request has enough information for the Server to get the right chunk back to the Client.
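The offset-carrying File Request can be sketched with RandomAccessFile. The class name `FileChunker` and the 4 KB chunk size are illustrative assumptions, not the engine's actual values; the point is that each request names its own offset, so chunks can be served in any order.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Random;

public class FileChunker {
    public static final int CHUNK_SIZE = 4 * 1024; // 4 KB per message

    // Serve one chunk starting at the offset carried by the File Request.
    // Because every request names its own offset, out-of-order requests
    // still pull the right bytes and never overwrite each other.
    public static byte[] readChunk(File file, long offset) {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            long remaining = raf.length() - offset;
            if (remaining <= 0) return new byte[0]; // past end of file
            byte[] chunk = new byte[(int) Math.min(CHUNK_SIZE, remaining)];
            raf.seek(offset);
            raf.readFully(chunk);
            return chunk;
        } catch (Exception e) {
            return new byte[0];
        }
    }

    // Round-trip check: request chunks out of order and reassemble.
    public static boolean demo() {
        try {
            File f = File.createTempFile("chunker", ".bin");
            f.deleteOnExit();
            byte[] data = new byte[10_000];
            new Random(42).nextBytes(data);
            Files.write(f.toPath(), data);

            byte[] out = new byte[data.length];
            for (long off : new long[] {8192, 0, 4096}) {
                byte[] chunk = readChunk(f, off);
                System.arraycopy(chunk, 0, out, (int) off, chunk.length);
            }
            return Arrays.equals(data, out);
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true" if integrity held
    }
}
```

The demo deliberately requests offsets 8192, 0, 4096 in that scrambled order; because each chunk is copied to the position its own offset names, the reassembled file still matches the original.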

To test the integrity of the files I implemented an Entity World serializer and made an Entity World with 10,000 entities (~1.4 megabytes of disk space used).  I had that world saved to disk, then sent one File Request to the Server with the output file set to a different name than the source file.  The expected result was to load the output file sent over the network into a new Entity World state and compare the Entity count to the original; if the counts matched, transaction integrity was sustained.  The counts did match and the file share was flawless.  No exceptions, no redundant messages thanks to optimized ACKing, and one duplicated Entity World on my disk drive waiting to be deleted.  Wait!  If I delete a whole Entity World, is that a form of Digital Genocide?  Oops...

 Confession: Most of the time I post on here, the so-called "updates" are old and posted a little after the fact, either because I was coding until I passed out or because I got distracted by life.  The late updates posted here as if they just happened won't change.  At least now you know that this blog isn't my primary focus; getting the engine done is.

Saturday, January 9, 2016

Finally! Network Portion is Beta

Networking is finally solid enough to warrant calling it beta.  It has been grueling, and many challenges arose during the design.  Some challenges I had to deal with were:
  • Design the Network side to be easily maintainable, with enough room to incorporate per-connection encryption that would secure logins and other important messages.  Pretty much enough configuration of the core protocol without risking message reliability.
  • Protocol: TCP or UDP?  I use both, but since TCP is already reliable, the design had to allow messages to be classed as Reliable or Unreliable; Unreliable messages go through the UDP channel.
  • Dropped Messages: The challenge here was ensuring messages do arrive even if they are a little late.  The answer was an ACK message compact enough to conserve bandwidth but with enough information for the Sender to know which message it referred to.
  • Synchronicity: Synchronized messages need to know, across two different machines, whether they are the newest message or not.  To do that I created a sort of backlog of network traffic on both ends that the Receiver reviews when a message arrives to determine whether it is the newest.  If it isn't, the message is dropped and no further copies of it are sent.
  • Bandwidth Concerns:  It isn't a good idea to just throw everything out on the network without a care for the user's bandwidth.  To address that I designed a limiter that only allows a certain amount of data per second; if that limit is hit, outgoing messages are paused to be sent later.  Some messages ignore this since they are important and small enough to send anyway.  The limiter is set to a reasonable amount that lets important messages through without hogging the bandwidth.  Another in-progress feature is making the limiter dynamically increase or decrease its cap based on history.  That feature isn't a priority since tests are done locally, but it is on the TODO.
  • Message Serialization:  The serialization portion is a little tedious because you pretty much have to code in which fields of an object are written and read for every message.  The good thing is that messages can be fine-tuned to use as little bandwidth as possible.  Extra code versus more bandwidth usage: I'll take extra code any day.
  • Latency Detection:  This was a fairly easy challenge because all it required was simulating latency in tests.  Let's just say my expectations were satisfied thanks to all of the above.  Certain messages also attach a timestamp that is compressed before being sent out, so the Receiver can use it to interpolate movement, for example; with a backlog of previous messages the interpolation is quite easy to determine.
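The bandwidth limiter described in the list above could be sketched as a simple token bucket. All names here (`BandwidthLimiter`, `trySend`, the per-second cap) are hypothetical; the engine's actual limiter may be structured differently.

```java
public class BandwidthLimiter {
    private final long bytesPerSecond;
    private long available;   // bytes left in the current "bucket"
    private long lastRefill;  // nanoTime of the last refill

    public BandwidthLimiter(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.available = bytesPerSecond; // start with a full bucket
        this.lastRefill = System.nanoTime();
    }

    // Returns true if the message may go out now; otherwise the caller
    // queues it to be sent later, once the bucket refills.
    public synchronized boolean trySend(int messageBytes, boolean important) {
        refill();
        if (important) return true; // important, small messages bypass the cap
        if (messageBytes <= available) {
            available -= messageBytes;
            return true;
        }
        return false;
    }

    // Earn back bytes proportional to the time elapsed since last refill,
    // never exceeding one full second's worth.
    private void refill() {
        long now = System.nanoTime();
        long earned = (now - lastRefill) * bytesPerSecond / 1_000_000_000L;
        if (earned > 0) {
            available = Math.min(bytesPerSecond, available + earned);
            lastRefill = now;
        }
    }
}
```

With a cap of `new BandwidthLimiter(1000)`, a 600-byte message goes out immediately, a second 600-byte message in the same second is deferred, and an important-flagged message still passes.  The dynamic history-based adjustment mentioned above would amount to tuning `bytesPerSecond` over time.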
The graphics portion is next.  I dropped it due to conflicting priorities, but now it will be addressed.  All these troubles for a Space Station 13-like game.  I love that game.