
[Solved] Plastic5 slow LAN performance?


Andy22


Hi,
 
I evaluated Plastic5 with several DB backends for use in a game project environment, in particular Unity.
 
The special properties in this scenario are:
 
1) Nearly no concurrent DB/Plastic client connections needed. In reality only one or two clients are accessing/working on the server at any given time.
2) Plastic/the DB needs to handle medium and large binary files of all sorts (psd files, meshes, music files, max files, movies); some can be compressed fairly well, others not at all.
3) Unity creates a special .meta file for every single asset/code file in the project. This means the number of files doubles, and those meta files are very small (<1 KB) separate files that add extra workload.
4) The database will get very large (10-60 GB) if assets are also put under Plastic.
5) Large parts of the binary files may change frequently, so will even a small change in a 400 MB psd file result in a complete re-upload? (Can Plastic save/handle block-level changes in binary files, like Dropbox?)
6) A low number of actual code files (300-1000) that need all the advanced DB/SCM features like branching, versioning, etc.
7) In practice the biggest chunk is assets, which would only need versioning/replication.
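Regarding point 5: for reference, here is a minimal sketch of how Dropbox/rsync-style block-level change detection typically works (fixed-size blocks with per-block hashes). The names and block size are illustrative only, not Plastic's actual implementation:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks; the size is an arbitrary choice here

def block_hashes(data: bytes) -> list:
    """Split file content into fixed-size blocks and hash each block."""
    return [hashlib.sha1(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return the indices of blocks that differ between two file versions;
    only these blocks would need to be re-uploaded."""
    old_hashes = block_hashes(old)
    return [i for i, h in enumerate(block_hashes(new))
            if i >= len(old_hashes) or old_hashes[i] != h]
```

With a scheme like this, a small edit inside a 400 MB psd would only re-upload the blocks whose hashes changed.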
 
 
Here are my benchmarks and problems so far:
Test system, client:
16 GB RAM, Win7 x64, i7, with a 64 GB SSD (~400 MByte/s read/write)
 
Test system, server:
4 GB dedicated server running Win7 x64 (we also tested Windows Server 2012 R2, with the same results)
Gigabit LAN with 9k MTU and ~100-120 MByte/s up/down to the server via SMB; we can reach 99.8% of network speed.
In all tests the CPU/RAM usage never went above 70%, so it should not be CPU/RAM limited.
 
 
 
Test Set 1, client -> server checkin (4.5 GB, 33k files):
(Tested on a 1 TB HDD that delivers 120 MByte/s read/write)
 
native copy over SMB:            3:30 min
SQLite .NET adapter (1.0.65):    3:22 min
SQLite .NET adapter (1.0.88):    3:23 min
SQL CE 4:                        4:45 min
SQL Express 2012:                4:20 min
Firebird Embedded 2.1:           4:40 min    (upload done at 3:30, remaining time spent "Confirming checkin operation")
Firebird Server 2.5.2:           4:40 min    (upload done at 3:30, remaining time spent "Confirming checkin operation")
PostgreSQL 9.3.2-3:              *none       (test stopped after 10 min, ~40% of upload done)
MySQL 5.6.15.0:                  *none       (test stopped after 10 min, ~60% of upload done; tested with Plastic's MySQL tips and with the default dev/server machine config)
 
 
Results: Both "big" open-source DBs performed horribly, no matter what settings we tried. We assume there is a problem running them under Windows in these conditions, with binary files? Firebird performs the upload like SQLite, but has a strange final "hiccup" at "Confirming checkin operation"? The other DBs performed at filesystem speed, which was limited by the many small file writes (meta).
 
 
Test Set 2, client -> server checkin (single 2 GB, pre-compressed file):
(Tested on a 128 GB SSD, 570 MByte/s read and 480 MByte/s write; changing the filesystem (NTFS/exFAT/FAT) and cluster size (4k, 16k, 32k) did not matter)
 
native copy over SMB:                            ~110 MByte/s
Firebird 2.5.2 async/sync, SQLite, SQL CE 4:     ~10-20 MByte/s = 10-25% of total network speed
 
So Plastic or the DB is only able to reach 15-25% of the total available network bandwidth?
 
 
So my question is: is this expected behavior, or can we do something to improve this figure? I did not expect to get SMB copy performance, but I hoped for something like 40-60 MByte/s for a single large file.
 
Maybe someone can test what speed they achieve in their GBit LAN, just checking in a single large 2 GB file?
 
 
Thx Andy


Just tested Plastic5 with client and server on the same powerful client machine with a fast SSD. Even in this ideal scenario, for a single file, Plastic is limited to 25 MByte/s?

 

What's the limiting factor here? Is it some obscure TCP or buffer problem? Any tips are welcome!


  • 2 weeks later...

Hi Andy,

 

Reviewing your tests, the first one doesn't look bad, right? And the MySQL issue is probably related to some configuration parameters. Did you change the data packet size?

 

Regarding the second one, there is no parameter nor a Plastic limitation of 25 MByte/s, but can you post the output of the "cm iostats" command? I will figure out if there is something strange...

 

Regards,

Carlos


> Did you change the data packet size? [...] can you post the output of the "cm iostats" command?

 

Yes, I always used the 10M packet size for MySQL in all tests.

 

Here are the "cm iostats" results using the HDD; if needed I can do the same for a local server on SSD, but I suspect it's not a problem with storage speed.

 

 

Test run on the server itself (client + server):

 

Server

    Performing network tests with server: localhost:8087. Please wait...

    Upload speed   = 432 Mbps.     Time uploading  16MB = 296ms.      Server: localhost:8087

    Download speed = 1360 Mbps.  Time downloading  16MB = 94ms.       Server: localhost:8087

    Performing disk speed test on path: C:\Users\Admin\AppData\Local\Temp\PlasticSCM_IOStats. Please wait...

    Disk write speed = 86 MB/s.   Time writing  512MB = 5943 ms.

    Disk read speed  = 840 MB/s.  Time reading  512MB = 609 ms.

 

 

Test run from the Win7 client to the Win7 server:

 

    Performing disk speed test on path: C:\Temp\PlasticSCM_IOStats. Please wait...

    Disk write speed = 63 MB/s.   Time writing  512MB = 8096 ms.

    Disk read speed  = 1171 MB/s. Time reading  512MB = 437 ms.

 

 

Firebird Server

Performing network tests with server: SERVER:8087. Please wait...

Upload speed   = 384 Mbps.     Time uploading  16MB = 328ms.      Server: SERVER:8087

Download speed = 512 Mbps.   Time downloading  16MB = 249ms.      Server: SERVER:8087

SQLite:

Performing network tests with server: SERVER:8087. Please wait...

Upload speed   = 384 Mbps.     Time uploading  16MB = 327ms.      Server: SERVER:8087

Download speed = 680 Mbps.   Time downloading  16MB = 188ms.      Server: SERVER:8087

 

 

It seems this test ignores which DB backend is used.

 

So the 384 Mbps upload speed seems strange, but it should still allow ~48 MByte/s instead of 25.
 
We have 9k jumbo frames enabled, and SMB works at 99% of bandwidth to and from the server on our GBit LAN (~115 MByte/s real up/down copy).

 

Any more tweaks I can try?

 

Thx

Andy


Here are the data for a fresh install, with client and server on the same powerful machine (i7, 16 GB RAM, SSD).
 

 

Performing network tests with server: localhost:8087. Please wait...
Upload speed   = 1024 Mbps.    Time uploading  16MB = 125ms.      Server: localhost:8087
Download speed = 4128 Mbps.  Time downloading  16MB = 31ms.       Server: localhost:8087

Performing disk speed test on path: C:\Temp\PlasticSCM_IOStats. Please wait...
Disk write speed = 157 MB/s.  Time writing  512MB = 3260 ms.
Disk read speed  = 1641 MB/s. Time reading  512MB = 312 ms.

 

 

This still results in an average checkin speed of only ~15-25 MByte/s using the default MS SQL CE edition. This is for an empty DB and a single pre-compressed 2.2 GB zip file checkin; btw, SQLite/Firebird perform similarly.

 

So something is limiting Plastic here. As I already asked: can anyone confirm going beyond this speed limit on their own machine for this checkin scenario?

 

Thx

Andy


Just tested 5.0.44.534 with a pre-compressed 1.8 GB single zip file; it took 94 seconds to check in with the fastest DB option (SQLite). Server and client are on the same machine, using a fast SSD.

 

That's just ~20 MB/s.

 

On the other hand, updating the workspace after deleting this file took just 26 seconds, for ~72 MB/s. So, any clue why Plastic is so much slower on checkin, and how can this be sped up?

 

Thx

Andy


Hi Andy,

 

we need to carry out the same test locally, but take into account that during the checkin operation the information is compressed, which might take longer than the decompression during the update operation.

 

If you want you can run the following test I ran last week so we can compare....

 

Download the following content into a new workspace working with a new repository.

 

1) svn co svn://svn.icculus.org/alienarena/trunk
 
2) git clone git://github.com/torvalds/linux.git
 
3) git clone git://github.com/mono/mono.git
 
4) svn co https://svn.apache.org/repos/asf/openoffice/trunk
 
5) wget "http://downloads.sourceforge.net/project/warzone2100/warzone2100/Videos/high-quality-en/sequences.wz?r=http%3A%2F%2Fwz2100.net%2Fdownload&ts=1367833853&use_mirror=garr"

 

You should have a workspace with 221K files, 13K directories and 8.1 GB.
 
Times are in ms.

 

(attached screenshot: table of benchmark results)

 

I ran the tests at Amazon EC2.

 

Amazon instance: i2.xlarge (SSD). Values in ms. SQL Server 2012 as the Plastic SCM backend (trial license). 221K files, 13K dirs, 8.08 GB. Plastic SCM version: BL533.

 

Tell me how it performs for you!


I will try this, but your results average out to around ~32 MB/s.

 

Is there a way to disable compression, or to change any buffer/packet size for plasticd?

 

Don't get me wrong, 25-35 MB/s is still workable and still faster than svn; Perforce also seems to perform at Plastic's speed for large binary files. I just wonder what the bottleneck is here, since I don't see the CPU/network/memory/SSD maxing out even remotely.

 

thx

Andy


> during the checkin operation the information is compressed, which might take longer than the decompression during the update operation.

 

I think I found the "speed limiter": I just tried to gzip the pre-compressed test file using 7-Zip's gzip/normal profile. It matches 25 MB/s exactly...

 

So how can I disable this compression completely to confirm my suspicion, and is it possible to configure Plastic so it only compresses certain file types?
 
What compressor is used? Is it a zlib .NET wrapper, or the zlib.dll I see?

 

 

Assuming zlib compression:

 

Compression might improve speed for code/text files over the network, but it actually slows down performance for large, hard-to-compress binary files. Also, is there an option to use another compressor, since gzip/zlib is really slow compared to lzo/LZ4/quicklz/lzturbo?

 

The problem is that zlib's compression speed is around 20-50 MB/s, while its decompression speed is also just 60-200 MB/s, which means zlib may limit an SSD/HDD-based system. In our case we would rather use ZFS + LZ4, NTFS compression, BTRFS + lzo, or none at all, since space is cheap. You also need a pretty good CPU (i7) to get even slightly over 25 MB/s compression speed with zlib in our cases.
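To put rough numbers on this claim, one quick way to measure zlib throughput at different levels is to time it on incompressible random data, a stand-in for a pre-compressed zip/psd (a sketch; absolute figures will vary by CPU):

```python
import os
import time
import zlib

def compress_throughput(data: bytes, level: int) -> float:
    """Return zlib compression throughput in MB/s for the given level."""
    start = time.perf_counter()
    zlib.compress(data, level)
    return len(data) / (time.perf_counter() - start) / 1e6

# Incompressible payload, standing in for a pre-compressed zip/psd file.
payload = os.urandom(16 * 1024 * 1024)
for level in (1, 6, 9):  # Z_BEST_SPEED, Z_DEFAULT_COMPRESSION, Z_BEST_COMPRESSION
    print(f"level {level}: {compress_throughput(payload, level):.0f} MB/s")
```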

 

 

Thx

Andy

 

PS: Here are two examples that illustrate the compression-speed problem for zlib: https://sites.google.com/site/powturbo/home/benchmark and http://www.quicklz.com/bench.html


OK, I could confirm that zlib is at fault here: I compiled a zlib64.dll with "#define FASTEST" and statically overrode compress2() to always use "Z_BEST_SPEED". I could not test "Z_NO_COMPRESSION", which resulted in a client error message.

 

Results:

Plastic zlib64.dll = 25-30 MB/s     (I guess Z_DEFAULT_COMPRESSION or Z_BEST_COMPRESSION?)
 
custom zlib64.dll  = 55-60 MB/s     (FASTEST + Z_BEST_SPEED)

 

So disabling compression would be a huge gain in our case for binary files, but of course the best option would be a settable compression level (none, speed, max) per file type. I would also suggest looking into one of the faster compression libraries, like quicklz, lzturbo or lzo.
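The per-file-type idea could look roughly like this sketch (the extension table and levels are made up for illustration, with 0 meaning "store uncompressed"; this is not Plastic's actual configuration mechanism):

```python
import os
import zlib

# Hypothetical per-extension policy: zlib level, with 0 = store uncompressed.
COMPRESSION_POLICY = {
    ".cs": 6, ".txt": 6, ".meta": 6,   # text compresses well, worth the CPU
    ".psd": 1, ".max": 1,              # big binaries: fastest level only
    ".zip": 0, ".png": 0, ".mp3": 0,   # pre-compressed: skip compression
}
DEFAULT_LEVEL = 1  # Z_BEST_SPEED as the fallback

def compress_for_checkin(path: str, data: bytes):
    """Pick a compression level from the file extension; return (level, payload)."""
    ext = os.path.splitext(path)[1].lower()
    level = COMPRESSION_POLICY.get(ext, DEFAULT_LEVEL)
    if level == 0:
        return 0, data                 # store the raw bytes as-is
    return level, zlib.compress(data, level)
```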

 

I wasn't expecting Plastic to be limited by zlib's compression/decompression speed rather than by HDD/SSD or network bandwidth.
 
I guess this should be fairly easy to fix?

 

 

thx

Andy

 

 

PS: Any suggestions on what to do now? Using my custom zlib.dll seems to work fine, but I'm not fully aware of the consequences for the database backend.


Wow! Thanks for the test! This is a very interesting topic!!

 

I have some questions and some answers.

 

== Questions ==

 

native copy over SMB:                             3:30min

 

1) Was this native copy operation for all the files one by one, or was it a single compressed file containing all the files?

 

2) Is the file set more binary than text? What compression ratio do we get for the data set?

 

== ANSWERS ==

 

plastic  zlib64.dll = 25-30 MB/s     (i guess Z_DEFAULT_COMPRESSION or Z_BEST_COMPRESSION?)

 

 

We are only using "Z_BEST_COMPRESSION". I'm eager to study and test your change; I'll keep you posted.

 

== ACTIONS ==

 

1) I need you to run the test again, but now with the cm log enabled (http://www.plasticscm.com/infocenter/technical-articles/kb-enabling-logging-for-plastic-scm-part-i.aspx). Make sure you enable the "Performance" logger, since that one is going to give us the time spent compressing, generating hashes, reading files, and so on.

 

2) We are going to prepare a new release (today), just for you, with compression disabled; let's check how the operation performs then. I'll contact you about this.
 
3) We are going to prepare another release where you will be able to set compression by file type.
 
4) It's on our roadmap to implement better uploading of big data blocks.

 

 

I think we can improve the speed a lot!! Thanks for the spark!


Hi,

thx for the update, looking forward to this version!
 
First, the 3:30 min test was done by simply copying file by file over SMB, which is limited by the number of small files involved. This test did not include all the big binary files that are hard to compress. I wanted to split the speed test into two categories; the first (3:30) was for programmers, with lots of code, meta and mixed binary files.
 
The second test, where Plastic is quite limited, was for artists. It mainly involves psd, max and other files, which are often already pre-compressed, and a single checkin operation can get really big (10 GB+).
 
I'm not that concerned about how Plastic handles the first (code + mixed binaries) case, but about how well it performs on large game assets.
 
So for the "worst" binary case, I'm now simply using a single pre-compressed 2 GB zip file.
 
So far I could roughly double my checkin speed using the modified zlib64.dll on my private and work machines. I want to run your mass source test with my zlib64 as well and see if it speeds up that test case too.
 
Also, don't forget that "green IT" is a common goal these days; this means the Plastic server might run on a low-power CULV CPU! In such a case the server might not deliver enough decompression speed using zlib; other compression libraries (lzo) are much more optimized for low CPU usage and high decompression speed.

Thx

Andy

PS: I really think you should aim for a compression library that can deliver 500-800 MB/s compression/decompression speed, so that Plastic is network/SSD limited again. It's not totally uncommon these days to see 10 Gbit LAN setups, since the adapters have gotten cheaper (http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/ethernet-x540.html) and 10 Gbit will continue to grow beyond the server market.


Hi!

 

yes, we'll be reviewing this whole area of compressing the information; we can clearly improve the speed by changing it.

 

Here you have the "custom" release: www.plasticscm.com/externalcontent/releases/PlasticSCM-5.0.44.534-windows-installer.exe

It has the compression deactivated (you'll see it inside the "Performance" log).


> Here you have the "custom" release [...] It has the compression deactivated.

Just did some quick tests and here are the results:
 
I used (svn co svn://svn.icculus.org/alienarena/trunk) as a valid representation of our own game project; the mix of files seems about right. The second test is just a single 1.8 GB pre-compressed file.

Test1:

svn co svn://svn.icculus.org/alienarena/trunk

alienarena (1.44 GB), 6,040 files, 323 folders

Client(SDD) -> 1GBit Lan -> Server(HDD) (SQLite)

checkin only:

1) 46 sec, ~32.0 MB/s (orig. Plastic5 zlib64, Z_BEST_COMPRESSION)
 
2) 26 sec, ~56.7 MB/s (custom zlib64, FASTEST + Z_BEST_SPEED, gcc non-asm compile)
 
3) 25 sec, ~59.0 MB/s (no-compression Plastic5 version)
 
NOTE: The SQLite database file using my custom zlib 2) was only 5% larger than with Plastic's original Z_BEST_COMPRESSION.

Test2:

single 1.83 GB pre-compressed zip file

Client(SDD) -> 1GBit Lan -> Server(HDD) (SQLite)

checkin only:

1) 98 sec, ~19.2 MB/s (orig. Plastic5 zlib64, Z_BEST_COMPRESSION)
 
2) 49 sec, ~38.4 MB/s (custom zlib64, FASTEST + Z_BEST_SPEED, gcc non-asm compile)
 
3) 36 sec, ~52.0 MB/s (no-compression Plastic5 version)
 
4) 53 sec, ~35.5 MB/s (custom zlib64, FASTEST + Z_BEST_SPEED, gcc asm compile)
 
NOTE: 4) shows that the old 32-bit assembler version actually reduces speed on modern x64, compared to a non-asm zlib compile.

 

 

Conclusion: Switching to a faster zlib64 or to no compression performed better than the default compression in both cases. We still only reach about ~50% of network bandwidth, but this could actually be the DB backend + write-back logic on the server.

 

 

bye

Andy

 

 

PS: I assume this special build is not ready for production, or can we use this build and still upgrade to future versions? What about using our custom zlib version? It should be compatible, since it's hidden inside the zlib logic?


Hello Andy,
 
First of all, I'd like to point out that the compression mode currently used by Plastic is actually Z_BEST_SPEED.
 
I've been running some zlib tests as you did, and I'd like to share the results with you. But first, I'd like to explain a little about how the data uploading process works in Plastic.
 
The uploading process uses a pipeline approach. That means there are four threads involved in the data uploading process, each with a different function (and, of course, they work simultaneously):
  • Reading the disk content in chunks
  • Hashing the file content
  • Compressing the data chunks
  • Uploading the compressed data to the server
The output of one thread is the input of the next. This way, the usage of the different system resources is optimized.
 
Plastic logs the time of each pipeline stage and also the total time (which, of course, is not the sum of the stage times). This is important to see how the compression time actually affects the global time.
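The four-stage pipeline described above can be sketched roughly like this (the thread/queue names are hypothetical; this is not Plastic's actual code). Because the stages overlap, the total time tracks the slowest stage rather than the sum of all four:

```python
import hashlib
import queue
import threading
import zlib

def pipeline_upload(chunks, send):
    """Toy four-stage checkin pipeline: read -> hash -> compress -> upload.

    Each stage runs in its own thread; bounded queues connect the stages,
    so all four stages work on different chunks at the same time.
    """
    q_hash, q_zip, q_send = queue.Queue(4), queue.Queue(4), queue.Queue(4)
    DONE = object()  # sentinel that flushes each stage

    def reader():                                    # stage 1: read chunks
        for chunk in chunks:
            q_hash.put(chunk)
        q_hash.put(DONE)

    def hasher():                                    # stage 2: hash content
        while (chunk := q_hash.get()) is not DONE:
            q_zip.put((hashlib.sha1(chunk).hexdigest(), chunk))
        q_zip.put(DONE)

    def compressor():                                # stage 3: compress
        while (item := q_zip.get()) is not DONE:
            digest, chunk = item
            q_send.put((digest, zlib.compress(chunk, 1)))
        q_send.put(DONE)

    def uploader():                                  # stage 4: upload
        while (item := q_send.get()) is not DONE:
            send(*item)

    threads = [threading.Thread(target=f)
               for f in (reader, hasher, compressor, uploader)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

This structure also explains Andy's observation: if compression is the slowest stage, the whole pipeline runs at zlib's speed no matter how fast the disk or network is.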
 
My testing environment is:
  • server & client in the same machine (Windows 8 x64)
  • sqlite as database backend.
  • HDD (client & server)
  • plastic release: 5.0.44.534
 
I ran three kinds of checkin operations:
  • Text file of 700 MB (log.txt)
  • Compressed video file of 700 MB (movie.avi)
  • alienarena workspace (6K files, 1.44 GB)
 
with different zlib configurations:
 
  • without compression
  • current zlib library (1.2.3)
  • latest zlib library (1.2.8)
  • latest zlib library + FASTEST compilation flag
 
SINGLE FILES - RESULTS
 
File: log.txt - No compression
  UploadData   ReadFileContent: 7777 ms
  UploadData   CompressFileContent: 0 ms
  UploadData   SetRevisionData: 41016 ms
  UploadData   CalcHashCode: 1439 ms
  Total time uploading data 41047 ms

File: movie.avi - No compression

  UploadData   ReadFileContent: 10751 ms
  UploadData   CompressFileContent: 0 ms
  UploadData   SetRevisionData: 28608 ms
  UploadData   CalcHashCode: 1424 ms
  Total time uploading data 29062 ms

File: log.txt - Current zlib library

  UploadData   ReadFileContent: 8860 ms
  UploadData   CompressFileContent: 2641 ms
  UploadData   SetRevisionData: 654 ms
  UploadData   CalcHashCode: 1516 ms
  Total time uploading data 9015 ms

File: movie.avi - Current zlib library

  UploadData   ReadFileContent: 4691 ms
  UploadData   CompressFileContent: 26656 ms
  UploadData   SetRevisionData: 25434 ms
  UploadData   CalcHashCode: 1372 ms
  Total time uploading data 30812 ms

File: log.txt - Latest zlib library

  UploadData   ReadFileContent: 8704 ms
  UploadData   CompressFileContent: 2678 ms
  UploadData   SetRevisionData: 797 ms
  UploadData   CalcHashCode: 1528 ms
  Total time uploading data 8765 ms

File: movie.avi - Latest zlib library

  UploadData   ReadFileContent: 6111 ms
  UploadData   CompressFileContent: 26560 ms
  UploadData   SetRevisionData: 28108 ms
  UploadData   CalcHashCode: 1539 ms
  Total time uploading data 34000 ms

File: log.txt - Latest zlib library + FASTEST

  UploadData   ReadFileContent: 8875 ms
  UploadData   CompressFileContent: 2171 ms
  UploadData   SetRevisionData: 3766 ms
  UploadData   CalcHashCode: 1468 ms
  Total time uploading data 9719 ms

File: movie.avi - Latest zlib library + FASTEST

  UploadData   ReadFileContent: 8597 ms
  UploadData   CompressFileContent: 16277 ms
  UploadData   SetRevisionData: 33656 ms
  UploadData   CalcHashCode: 1689 ms
  Total time uploading data 35000 ms

ALIEN ARENA RESULTS

 
Alien Arena Workspace - Current zlib library
  UploadData   ReadFileContent: 25610 ms
  UploadData   CompressFileContent: 30939 ms
  UploadData   SetRevisionData: 22271 ms
  UploadData   CalcHashCode: 4280 ms
  Total time uploading data 42890 ms

Alien Arena Workspace - Latest zlib library

  UploadData   ReadFileContent: 28588 ms
  UploadData   CompressFileContent: 30079 ms
  UploadData   SetRevisionData: 23066 ms
  UploadData   CalcHashCode: 4046 ms
  Total time uploading data 52265 ms

Alien Arena Workspace - Latest zlib library + FASTEST

  UploadData   ReadFileContent: 35189 ms
  UploadData   CompressFileContent: 20685 ms
  UploadData   SetRevisionData: 23710 ms
  UploadData   CalcHashCode: 3731 ms
  Total time uploading data 51000 ms
 
The conclusions in my environment are:
  • there are no differences between zlib 1.2.3 and 1.2.8
  • the compression time is considerably better with the FASTEST compilation flag
  • the compression time is not too relevant in the total upload time
 
They are very different in yours, and both are right :).
 
We can run a live session with you to check why the compression is so significant in your testing environment. If you agree, please write to us at support@codicesoftware.com and we'll contact you to arrange the meeting.
 
You can use your compiled zlib library; it's compatible with the server one, so it doesn't cause problems.
 
You can also use the no-compression version, because the data is stored in the database along with the type of compression used. This way, the data uploaded (with this version) will be stored with compression-type = none and everything will just work.
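The point above, that each revision is stored together with its compression type, can be illustrated with a toy sketch (the field names `ctype`/`blob` and the tag values are invented, not Plastic's actual schema):

```python
import zlib

NONE, ZLIB = 0, 1  # invented compression-type tags stored with each revision

def store_revision(data: bytes, compress: bool) -> dict:
    """Store a revision blob together with the compression type used."""
    if compress:
        return {"ctype": ZLIB, "blob": zlib.compress(data, 1)}
    return {"ctype": NONE, "blob": data}

def load_revision(rev: dict) -> bytes:
    """Decompress according to the stored tag, so revisions written with and
    without compression can coexist in the same database."""
    return zlib.decompress(rev["blob"]) if rev["ctype"] == ZLIB else rev["blob"]
```

Because the tag travels with the data, a server can mix revisions checked in by a no-compression build with revisions from a regular build.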
 
Best regards,
    Rubén.

> They are very different in yours, and both are right :) [...] We can run a live session with you to check why the compression is so significant in your testing environment.

Hi Rubén,
 
I'm at home now, so I can't give exact results.
 
First off, thanks for taking the time to look into this, but something looks really strange in your results.

1) The "no compression" Alien Arena data seems to be missing?
 
2) Going by your data, the latest/fastest zlib slowed the Alien Arena upload by 10 seconds compared to the default? That should not be possible, so the data must be wrong or mislabeled somehow.
 
3) In the avi and Alien Arena tests you never break 25 MB/s, no matter what the settings. Those values contradict my tests, so something is very wrong here.
 
We should ask why you are also bound by the "magic" ~20-30 MB/s barrier for mixed binary files, even on a local system. I have to double-check my test results now, since our tests differ so much.
 
Can you upload your avi test file, the txt file and your Alien Arena set (as a zip), so we have exactly the same fixed test data from here on?
 
I will try to give you the same Plastic performance logs, and we can work from there.
 
Can you also post your SQLite connection string, the MTU size you use and your NTFS cluster size, and check that NTFS compression is disabled?

 

Also, what is the meaning of the "SetRevisionData" part in the numbers? Why does it take as long as compression; is it the actual upload to the server?

 

bye

Andy

PS:

HDD (client & server)

 

Can you do a quick check on an SSD system, just to make sure you are not HDD I/O bound?


Here are the results for Alien Arena using my private gaming PC (Win7 x64 + SSD), with client and server on the same machine:
NOTE: I also rebooted before each test and deleted the SQLite DB, so all tests start with a "cold" file cache!
 
Also note that those values are with server and client operating on the same physical SSD (Samsung 470), so not ideal!

 

Plastic current zlib

UploadData - ReadFileContent: 12219 ms
UploadData - CompressFileContent: 26771 ms
UploadData - SetRevisionData: 6289 ms
UploadData - CalcHashCode: 3797 ms
Total time uploading data 26879 ms

Plastic no compression

UploadData - ReadFileContent: 15460 ms
UploadData - CompressFileContent: 15 ms
UploadData - SetRevisionData: 16910 ms
UploadData - CalcHashCode: 4388 ms
Total time uploading data 17160 ms

Plastic fastest zlib

UploadData - ReadFileContent: 12722 ms
UploadData - CompressFileContent: 15677 ms
UploadData - SetRevisionData: 7302 ms
UploadData - CalcHashCode: 4288 ms
Total time uploading data 15990 ms

So we see a 42% speed improvement using the faster zlib.

 

Those results are similar to the work PC data; the fastest compression performs a little better than no compression, since this set actually has a decent compression ratio.
 
So with a faster SSD, the values will be even more bound by zlib compression/decompression speed.
 
This would also indicate that your test system is probably limited by HDD speed?

 

 

bye

Andy

Link to comment
Share on other sites

Out of curiosity I did a reference test using my two SSDs, a Samsung 470 as the read source and a Samsung 830 as the write SSD, with server and client on the same machine.

 

Plastic no compression(Alien Arena)

UploadData - ReadFileContent: 10082 ms
UploadData - CompressFileContent: 16 ms
UploadData - SetRevisionData: 11763 ms
UploadData - CalcHashCode: 4175 ms
Total time uploading data 11841 ms

That's around 125 MB/s, which looks reasonable compared to what the Samsung 830 can write, while still using SQLite instead of a raw copy.
 
Keep in mind that the Samsung 830 is already 18 months old and discontinued, so I bet you can get to around 150-200 MB/s with a current-gen SSD setup.

 

This shows why the compression library needs to be able to feed the stream fast enough; on this system and test case we would always be zlib bound.

 

 

bye

Andy

 

PS: Of course we only aim to saturate our GBit network, so 120 MB/s would be the goal for us. Zlib simply can't deliver such compression speeds, so having/adding a faster library would be a good option.


Here are the work PC values using Alien Arena.
 
I have a Samsung 470 SSD on the client and a NAS HDD on the server. The server HDD does around 90-120 MB/s, but seeks slowly.
 
NOTE: We use PrimoCache with a 6-second write-back delay, to maximize write flushes on the server.

 

 

Plastic current zlib

UploadData - ReadFileContent: 10172 ms
UploadData - CompressFileContent: 41585 ms
UploadData - SetRevisionData: 11532 ms
UploadData - CalcHashCode: 4108 ms
Total time uploading data 41980 ms

Plastic no compression

UploadData - ReadFileContent: 10140 ms
UploadData - CompressFileContent: 30 ms
UploadData - SetRevisionData: 23431 ms
UploadData - CalcHashCode: 4301 ms
Total time uploading data 23509 ms

Plastic fastest zlib

UploadData - ReadFileContent: 10339 ms
UploadData - CompressFileContent: 23632 ms
UploadData - SetRevisionData: 11918 ms
UploadData - CalcHashCode: 4482 ms
Total time uploading data 23962 ms

Conclusion: I'm always zlib limited; the values are worse than on the gaming PC, since I only have an older i7 860 @ 2.8 GHz compared to the newer i5 @ 3.2 GHz.
 
In the "no compression" case, it seems I'm limited by the server's write speed. This also means that if you work on an older/slower PC or laptop, adding an SSD might not do much for Plastic checkin speed if compression is enabled; you will be zlib limited.

 

bye

Andy


1) The "no compression" Alien Arena data seems to be missing?
 
2) Going by your data, the latest/fastest zlib slowed the Alien Arena upload by 10 seconds compared to the default? That should not be possible, so the data must be wrong or mislabeled somehow.
 
3) In the avi and Alien Arena tests you never break 25 MB/s, no matter what the settings. Those values contradict my tests, so something is very wrong here.

 

Hello Andy,

 

This time I've run the tests with a hot disk cache (to avoid my disk performance limitations), so these results will look more like what you expected.

 

1) I didn't consider it relevant after my previous tests, because in my environment the compression stage was not the only key stage, but here you have the results:

 

Plastic - no compression

 UploadData - ReadFileContent: 1301 ms
 UploadData - CompressFileContent: 0 ms
 UploadData - SetRevisionData: 35922 ms
 UploadData - CalcHashCode: 4175 ms
 Total time uploading data 35969 ms

As you can see, once I'm not limited by the disk or the compression, I'm limited by the upload stage and the database storage of the checked-in data.

 

2) If you check the whole log, you'll see the read time was 10 seconds higher than in the default case; the pipeline's global result depends on all the stage times plus the current execution (how the synchronization between stages works).

 

I repeated the test with the new zlib + FASTEST + hot disk cache:

 UploadData - ReadFileContent: 1411 ms
 UploadData - CompressFileContent: 20454 ms
 UploadData - SetRevisionData: 17034 ms
 UploadData - CalcHashCode: 3653 ms
 Total time uploading data 23250 ms

3) My environment is different from yours, so you shouldn't worry about me not breaking the 25 MB/s limit. Anyway, I broke that limit in the previous scenario (with the hot disk cache and the fastest compression).

 

I have to say it's been a pleasure sharing these performance results with you; your feedback has been excellent.
 
I also hope this information helps you understand Plastic a bit better.
 
Furthermore, you should be happy to have such great hardware with so few performance boundaries :-)

 

Kind Regards,

    Rubén.


> I have to say it's been a pleasure sharing these performance results with you [...]

Thanks for your time. I hope this helps the dev team consider alternatives to zlib in the future, so you can keep up with increasing SSD/network speeds.
 
My hardware is actually pretty average at work; as a game developer you want an SSD + multi-core, since it helps with VS compile speeds and lightmap building. You also need to start the game very often, which may involve long loading times, so reducing those is essential in these environments.

thx

Andy

PS: I also suspect that my old FreeBSD x64 bug was related to a bad zlib compile, since the error message I got was similar to the one I got when trying to disable compression on my own. So hopefully I can do the final Plastic server setup using FreeNAS. Not sure if we want to use ZFS or UFS as the filesystem.


> I hope this helps the dev team consider alternatives to zlib in the future [...]

 

Of course; we'll add configurable compression options (even per file extension).

 

We're also working on some improvements to the checkin upload process for high-performance environments. I'll let you know when they're ready, in case you're interested in testing them.

 

If you have any other questions, please don't hesitate to contact us through the support email, or mine (if you prefer that way).

 

Rubén.

 

PS: I remember you commented on this in a previous post. In our Gbps network we're reaching 70-80% of the theoretical limit.

Upload speed   = 675.68 Mbps.   Time uploading 256MB = 3031ms.     Server: 192.168.1.69:6060
Download speed = 804.08 Mbps. Time downloading 256MB = 2547ms.     Server: 192.168.1.69:6060

Upload speed   = 672.14 Mbps.   Time uploading 256MB = 3047ms.     Server: 192.168.1.69:6060
Download speed = 799.06 Mbps. Time downloading 256MB = 2563ms.     Server: 192.168.1.69:6060

Archived

This topic is now archived and is closed to further replies.
