Thursday, January 28, 2016

Transferring Files over the Network? Sure, why not...

RECAP: In the last post I went into the Network portion and how it had become Beta.  The upgrade in status was the result of hitting a goal of stability and bandwidth consumption protection that I felt was an extreme priority.  There is enough customization in the way network messages are handled that getting the most out of bandwidth won't ever be an issue.  Thus, onto the next issue I needed to deal with...

Turns out that transferring files over a network wasn't as hard as I thought it would be.  If it wasn't for RandomAccessFile in Java, the performance would've been terrible and there would have been a need for lots more bubble wrap code (bubble wrap = exception handling).  The key to bandwidth-friendly file sharing is to not send the ENTIRE file over the network all at once.  Sure, you might have a great internet connection, but what if you are streaming your favorite movie off of your legal streaming service and all of a sudden that stream comes to a halt because my bandwidth-hungry program is taking up all your streaming power?  Well... it is simple really.  Send the file as separate data chunks that are reliable, bandwidth friendly and don't overwrite data from other chunks.  Happy to say I did just that.  Not only can it send the file chunks within a locked kilobyte limit, it can do it without caring about order.
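Here's a minimal sketch of the idea (the CHUNK_SIZE value and the readChunk helper are names I'm making up for illustration, not the engine's actual code): RandomAccessFile lets you seek straight to an offset and pull out just one slice of the file, so only a single chunk ever has to sit in memory.

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Arrays;

    public class FileChunkReader {

        // Hypothetical chunk size; the "locked limit of kilobytes" would go here.
        private static final int CHUNK_SIZE = 4 * 1024;

        // Reads one chunk of the file starting at the given offset.
        // Returns an empty array when the offset is at or past the end of the file.
        public static byte[] readChunk(RandomAccessFile file, long offset) throws IOException {
            if (offset >= file.length()) {
                return new byte[0];
            }
            file.seek(offset);                       // jump straight to the chunk, no sequential read
            byte[] buffer = new byte[CHUNK_SIZE];
            int read = file.read(buffer);            // may read less than CHUNK_SIZE near the end
            return read == CHUNK_SIZE ? buffer : Arrays.copyOf(buffer, read);
        }
    }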

Normally, when you read a file, it reads the same way you would read a book.  You could create the file chunks to send over the network that way if you wanted to load all of those chunks into memory and immediately send them all in one go.  Can most computers support it?  Sure.  Is it a good programming paradigm to consume all of your user's RAM just to make your life easier?  No.

In short (because this is already too long): the File Request is received by the Server.  The Server sends an ACK for that message, pulls a chunk off the File starting at the first position in the file, then sends it to the Client.  The Client gets the chunk and sends another File Request back to the Server, but this time the File Request carries extra information that tells the Server where the next chunk should begin.  If those messages go out of order, the integrity of the file is maintained because each File Request has enough information for the Server to get the right chunk back to the Client.
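A rough sketch of that exchange, with illustrative names of my own (FileRequest, FileChunk, fileName and offset are not the engine's real classes, and it reuses the readChunk helper from the sketch above): the offset travels with every request and every chunk, so a chunk that shows up out of order still lands at the right place in the output file.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Hypothetical request message: which file, and where the next chunk should start.
    class FileRequest {
        final String fileName;
        final long offset;
        FileRequest(String fileName, long offset) { this.fileName = fileName; this.offset = offset; }
    }

    // Hypothetical chunk message: the data plus the offset it belongs at.
    class FileChunk {
        final long offset;
        final byte[] data;
        FileChunk(long offset, byte[] data) { this.offset = offset; this.data = data; }
    }

    class FileTransfer {

        // Server side: pull the requested chunk out of the source file.
        static FileChunk serveChunk(FileRequest request) throws IOException {
            try (RandomAccessFile source = new RandomAccessFile(request.fileName, "r")) {
                return new FileChunk(request.offset, FileChunkReader.readChunk(source, request.offset));
            }
        }

        // Client side: write the chunk at its own offset (arrival order doesn't matter),
        // then ask for the next chunk starting right after it.
        static FileRequest receiveChunk(RandomAccessFile output, String fileName, FileChunk chunk) throws IOException {
            output.seek(chunk.offset);
            output.write(chunk.data);
            return new FileRequest(fileName, chunk.offset + chunk.data.length);
        }
    }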

To test the integrity of the files I implemented an Entity World serializer and made an Entity World with 10,000 entities (~1.4 megabytes of disk space used).  I saved that world to disk and then sent one File Request to the Server, with the output file set to a different name than the source file.  The expected result was to load that output file sent over the network into a new Entity World state and compare the Entity amount to the original.  If the Entity amount was the same, then the transaction integrity was sustained.  The sizes did match and the file share was flawless.  No exceptions, no redundant messages due to optimized ACKing, and one duplicated Entity World on my disk drive waiting to be deleted.  Wait!  If I delete a whole Entity World, is that a form of Digital Genocide?  Oops...
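The check itself is just a count comparison. A tiny sketch with made-up names (EntityWorld, EntityWorldSerializer, load, getEntityCount and the file names are illustrative, not the engine's API):

    // Illustrative integrity check: deserialize both worlds and compare entity counts.
    EntityWorld original = EntityWorldSerializer.load("world.dat");
    EntityWorld received = EntityWorldSerializer.load("world_copy.dat");

    if (original.getEntityCount() == received.getEntityCount()) {
        System.out.println("Transfer integrity sustained: " + received.getEntityCount() + " entities.");
    } else {
        System.out.println("Entity count mismatch, transfer corrupted.");
    }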

Confession: Most of the time I post on here, the so-called "updates" are old and are posted a little after the fact, either because I was coding until I passed out or because I got distracted by life.  The late updates that are posted here as if they just happened won't change.  At least now you know that this blog isn't my primary focus; getting the engine done is.

3 comments:

  1. This is good news...
    // Message 9531 got dropped!
    SimulatedLossLagListener : BulkEntityUpdateMessage(9531) was lost in translation...
    // Server detects that message wasn't sent based on history, so it sends it
    [Connection 1] Tx: BulkEntityUpdateMessage[id=9531,isNewest=false,ReliableOrdered]
    // Oh no! Another message was dropped
    SimulatedLossLagListener : BulkEntityUpdateMessage(msgID=9530) was lost in translation...
    // Server detects and sends it
    [Connection 1] Tx: BulkEntityUpdateMessage[id=9530,isNewest=false,ReliableOrdered]

    [Connection 1] Tx: BulkEntityUpdateMessage[id=9529,isNewest=false,ReliableOrdered]
    // Showing off that ids are unique to the type of message
    SimulatedLossLagListener : BulkEntityRemoveMessage(msgID=1) was lost in translation...
    // Server detects and sends
    [Connection 1] Tx: BulkEntityRemoveMessage[id=1,isNewest=false,ReliableOrdered]

    The above pretty much shows that messages are getting to the client even if they are dropped in the simulation. Also, this is under up to 700ms of lag, which is pretty bad.

  2. BulkEntityMessages average 3.4kB per message. If your download rate is 300kB per second (that was normal for me when I lived off in the country), then you can get 25 bulk entity updates per second, or 250 entities sent to your machine per second. This isn't optimized either; right now it sends a component attached to an entity even if there was no update to the component's data. :) There will be tests later for when it is optimized, and optimized it will be...

    Replies
    1. Keep in mind this is kilobits we are talking about. Networks operate off of a different metric than normal data on your computer. One megabit is only 125 kilobytes of data. ISPs often fool consumers with that metric because most people don't realize the difference (and there is a significant one), so they think a huge megabit is equal to a megabyte. If an ISP says you are going to get 5 megabits per second for what you pay, you are actually getting 625 kilobytes per second of downloadable data. Sucks, don't it.
