As a bit more detail, the reason for this is that by default your code runs faster when everything falls on a byte boundary (8 bits). If a structure has fields split across a byte boundary (say 12 bits to one field, 4 bits to another), then to read the 12-bit part your computer has to load the whole word, mask away the bits it doesn't care about, and shift the result. Same for the 4-bit part. That is a lot of extra work.
So instead of always trusting the programmer, the compiler just aligns fields on byte boundaries, so at worst it has to mask away part of the word it reads in, but not shift. For Subspace, they tried to fit as much data as possible into as few bytes as possible, to save bandwidth over the network back when a 28k modem was considered fast, which is why you have to use this packing feature with its structures.
If you look at most networked games (e.g., Quake), they waste tons of packet space. But they found they didn't need to fit as much as possible, because they never had to worry about 80 players all being very close to each other, and the average bandwidth most users would have is at most 5 KB/s.
Dr Brain - Sat Dec 26, 2009 10:36 am
I think you mean the 4-byte marks (on x86 at least; it's 8/16 on x86_64). #pragma pack(1) aligns things to the byte marks. I don't think either one changes how the bitfields are done.
Samapico - Sat Dec 26, 2009 11:08 am
Bak answered with this one-liner after a more detailed answer on ssforum. I got around the problem, and I actually don't even need that struct anymore: to ensure complete portability, I filled in my non-packed struct from the raw data with bit shifts and masks.
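For anyone hitting the same portability issue, the shift-and-mask approach mentioned above can be sketched roughly like this. The 12-bit/4-bit split and little-endian wire order are assumptions for illustration, not the actual Subspace packet layout, and `unpack_fields` is a hypothetical helper name.

```c
#include <stdint.h>

/* Extract a 12-bit field and a 4-bit field from two raw bytes,
 * assuming little-endian wire order with the 12-bit field in the
 * low bits. Because this never overlays a struct on the buffer,
 * it works regardless of compiler padding or bitfield layout. */
static void unpack_fields(const uint8_t *raw, uint16_t *x, uint8_t *y)
{
    /* Assemble the 16-bit word byte by byte (endian-independent). */
    uint16_t word = (uint16_t)raw[0] | ((uint16_t)raw[1] << 8);
    *x = word & 0x0FFF;              /* low 12 bits  */
    *y = (uint8_t)(word >> 12);      /* high 4 bits  */
}
```

The non-packed destination struct then just holds ordinary `uint16_t`/`uint8_t` fields, so the compiler is free to lay it out however it likes.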