ProtoBuffers seems almost too good to be true. Are there any significant downsides to using ProtoBuffers for serialization? Does it require a lot of CPU to serialize/deserialize? Does it require special software?
(it's Protobuf, @cm-steem :))
The only downside I can think of is that it's binary so it's more difficult to read off the air. I use Protobuf at work to publish data from a microcontroller to an Android app and a web service. Doing that in text with lexical interpretations would be a nightmare.
To further compress the binary serialization we could use a 16-byte binary representation of the CPIDs instead of their hexadecimal form. I suspect that's where a lot of the storage goes.
Do you think that's possible via flat buffers or grpc?
Do you have more details on how this can be done in Python? Do you mean compressing the string, or just converting the CPID from a string to binary?
The files would be far smaller if the CPID was omitted entirely, relying on a userId instead, perhaps with a separate userId:CPID index constructed for quick lookup.
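As a rough sketch of that idea (the `userId` field, the `rac` field, and the sample data are hypothetical, not from the actual schema; one CPID is the sample value used elsewhere in this thread):

```python
# Hypothetical sketch: records carry only a compact integer userId,
# and a single side index maps userId -> CPID for when it's needed.
user_to_cpid = {
    1: '5a094d7d93f6d6370e78a2ac8c008407',  # sample CPID from this thread
    2: 'ffffffffffffffffffffffffffffffff',  # made-up placeholder
}

# Each record stores a small int instead of a 32-character hex string.
records = [
    {'userId': 1, 'rac': 1234.5},
    {'userId': 2, 'rac': 67.8},
]

# Resolving a CPID back from a record is a constant-time dict lookup.
for rec in records:
    cpid = user_to_cpid[rec['userId']]
```

The per-record saving is roughly the 32 bytes of hex string minus the few bytes a small varint-encoded integer costs.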
Never heard of those :)
Sure. Change `User.cpid` from `string` to `bytes` and assign using hex conversion:

```python
>>> cpid = '5a094d7d93f6d6370e78a2ac8c008407'
>>> len(cpid)
32
>>> cpid.decode('hex')
'Z\tM}\x93\xf6\xd67\x0ex\xa2\xac\x8c\x00\x84\x07'
>>> len(cpid.decode('hex'))
16
```
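Worth noting that `str.decode('hex')` is Python 2 only; under Python 3 the same conversion would go through `bytes.fromhex` (a small sketch, not from the original discussion):

```python
# Python 3 equivalent of the hex conversion above:
# turn the 32-character hex CPID into its 16-byte binary form.
cpid = '5a094d7d93f6d6370e78a2ac8c008407'

raw = bytes.fromhex(cpid)  # 16 raw bytes, half the size of the hex string
assert len(cpid) == 32
assert len(raw) == 16

# Round-tripping back to the hex string is lossless.
assert raw.hex() == cpid
```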
It does make it more tedious to use but there should be a significant reduction in size.
For a fairer comparison I should also time how long it takes to write to disk.
The only downside of protobuf is that it's slightly confusing to work with at first, but now that we've got an established .proto file it's easily replicated.
It doesn't need much CPU to serialize/deserialize, though I don't have the stats to back that up.
In terms of special software, you just need the protobuf3 package; there are implementations for other languages, C++ for example, that can read and write the same files.
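Tying the two changes together, the schema edit discussed above might look something like this (a sketch only; the message and field names are assumed, not the project's actual .proto file):

```proto
syntax = "proto3";

// Hypothetical sketch of the discussed change: the CPID stored as
// 16 raw bytes instead of a 32-character hex string.
message User {
  bytes cpid = 1;  // assign with bytes.fromhex(...) from Python 3
}
```

Because the field type is `bytes`, any language with a protobuf3 implementation can read the same serialized files without caring how the hex conversion was done.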