&B/&H Clarification

ProKuku:
I’m not one to ask questions very often, more of a lurker who figures things out through what’s already provided. However, it’s been bugging me for the longest time that there isn’t a thread (from what I can tell) addressing the uses of &B and &H. I know that &B is for binary values, and &H is for hexadecimal values, but I’ve seen them used in a lot more cases than that, such as SPCOL, and even data for specific colors..? Basically, I’m just trying to figure out what in all you can do with &B/&H.

Binary and hexadecimal are just number systems, like decimal. Because computers work in binary, have specific optimizations for dealing with bitfields, and because hexadecimal is a friendlier way of writing binary (one hex digit = four bits), the two are commonly used for expressing certain kinds of data. Specifically, &b/&h let you write an integer-type value in terms of its internal binary representation (again, hex numbers translate easily to binary and back). For example, each of these triplets is three ways of writing the same number:
100
&b1100100
&h64

76
&b1001100
&h4C

-32
&b11111111111111111111111111100000
&hFFFFFFE0
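If you want to convince yourself, here's a quick sketch you can paste into SB (nothing fancy, just PRINT and the == comparison, which gives 1 for true):
PRINT 100, &B1100100, &H64   ' all three print as 100
PRINT 76 == &B1001100        ' prints 1 (true): same value
PRINT -32 == &HFFFFFFE0      ' prints 1 (true): same value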
The reason that it's integers specifically and not floats is that the binary representation of ints is very simple compared to that of floats (specifically, signed two's complement 32-bit). It's very easy to write a decimal number in terms of a binary or hex int, but the same is very difficult for floats (floating-point is hard).

Binary IS data to a computer, and so it is to some programmers. It can be convenient and memory-efficient to store multiple values in one integer by treating individual groups of bits as separate numbers, and extracting them later with bitwise AND and bit shifting. This is how bitfields and integers are commonly used to represent special values, and not just in SB.

Colors

In the case of colors, it has been a long-held practice to write colors as a set of hex digits, typically two for each component of red, green, and blue (as visible colors of light are made out of mixtures of these three). The order varies but is most commonly red-green-blue (hence RGB). The typical notation for "hex colors" is #rrggbb, where the letters correspond to the hex digits of each color component. Thus the brightest red you can imagine is #FF0000, a turquoise is #00C0D0, etc.

Colors in SB follow the same practice but also have an alpha channel, which controls the opacity of the color (how transparent it is). SB4 has full opacity support, but SB3 only applies it to SPCOLOR. Most often the alpha channel is turned all the way up so the color is fully opaque. So a fully opaque red would be written as &hFFFF0000, and thus the hex digit pattern for an SB color value is &haarrggbb.

It's a lot cheaper (especially in hardware) to represent individual pixel colors as a single packed binary value, or really a sequence of bytes, and it's a commonly-held practice already, so that's the reason. So where you see GCOLOR &hFFFF0000, know that that's red.

The color constants (such as #RED, aka #C_RED in SB4) represent a preset color in this format and are converted into their underlying integer value when your program is loaded. Note also that the RGB() function takes three or four numbers and punches them into this integer format for you, in case you're working with variables representing color channels or you don't like the hex notation. Therefore, these are all equivalent ways of writing the color red:
#RED or #C_RED in SB4
RGB(255, 0, 0)
&hFFFF0000
&b11111111111111110000000000000000
-65536
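To make the &haarrggbb layout concrete, here's a small sketch that builds that same red by hand with shifts and OR, then pulls individual channels back out with a shift and a mask, which is exactly the bitfield trick described earlier (R%, G%, B% and C% are just illustrative names):
R% = 255 ' red channel
G% = 0   ' green channel
B% = 0   ' blue channel
' Pack: alpha in the top 8 bits, then red, green, blue
C% = &HFF000000 OR (R% << 16) OR (G% << 8) OR B%
PRINT C% == &HFFFF0000     ' prints 1: same packed value
PRINT C% == RGB(255, 0, 0) ' prints 1: RGB() builds the same integer
' Unpack: shift a channel down to the bottom, then mask off the rest
PRINT (C% >> 16) AND &HFF  ' prints 255 (red)
PRINT (C% >> 8) AND &HFF   ' prints 0 (green)
The AND &HFF at the end is what isolates a single 8-bit channel once it's been shifted down.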
In SB4, the RGB() function is even optimized so that if its inputs are constant (none of them are variables), the expression is converted directly into the integer format when your program is loaded (like the constants), causing no performance penalty.

SPCOL

This one is actually a bit simpler to understand (or takes less writing). Basically, SPCOL takes a "collision mask" argument. This is still a single integer value, but it's treated as a set of 32 "flags", since an int is 32 bits in size. If the argument isn't supplied, the default value is &hFFFFFFFF, or "all flags up" if you will. This ensures a sprite will collide with any other sprite if the mask is not specified.

When two sprites are checked for collision, the masks of both sprites are compared. If the two masks share at least one flag that is turned on, then they collide; otherwise they don't. This lets you control which sprites are considered for collision with which other sprites.

As an example, pretend there's a sprite named Foo with a collision mask of &b10010. (I'm using fewer than 32 bits here so it's less annoying to read.) There are two other sprites that Foo might collide with on this frame: Bar, with a mask of &b01000, and Baz, with a mask of &b00010. Let's check the masks to see who Foo collides with:
Foo &b10010
Bar &b01000
Baz &b00010
Foo and Baz both have the second bit of their collision mask set to 1, so they will collide with each other. Bar doesn't have this bit set, so it doesn't collide with Foo. Again, these are still integers, so you could just as well write them like this:
Foo 18
Bar 8
Baz 2
...but this is extremely inconvenient for the programmer, because this notation doesn't convey what the value actually represents: a field of 32 individual collision flags. Again, for the computer it's cheap and convenient to use one packed value instead of 32 separate ones, so that's why a single integer is used. The binary and hex notations are provided because they've been useful tools for writing down data that isn't exactly a number, and representing it within a single "number", since the dawn of computing. It's established, it's convenient, it's robust.
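If you're curious what that check looks like written out by hand, here's a minimal sketch using the three masks above (the variable names are just stand-ins; the engine does the equivalent of this when it tests two sprites):
FOO% = &B10010
BAR% = &B01000
BAZ% = &B00010
' Two masks "collide" if they share at least one set bit,
' i.e. their bitwise AND is not zero.
IF (FOO% AND BAZ%) != 0 THEN
 PRINT "Foo and Baz collide"       ' prints: they share the second bit
ENDIF
IF (FOO% AND BAR%) == 0 THEN
 PRINT "Foo and Bar don't collide" ' prints: no shared bits
ENDIF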

Thanks for the detailed explanation! However, there are a few minor things I don’t exactly understand. When you converted -32 into binary and hexadecimal, how did you end up with &b11111111111111111111111111100000 and &h20? I never really learned how to write negative values in those systems.. Also, I’m not too sure how -65536 is a valid way to represent the color red. If you could answer these two questions that would be great.

Oops, -32 would not be written as &h20 in hex. I'll edit the original post to correct that. It should be &hFFFFFFE0. I'll elaborate on the rest later when I'm not busy and I've thought on it.

Also, I’m not too sure how -65536 is a valid way to represent the color red.
As I mentioned, a decimal literal, a binary literal, and a hex literal are all just different notations for the same underlying integer data type. Bin and hex literals spell out the binary data used to represent an integer more directly, so they might look a bit odd by comparison, but they name the same values. That's why &hFFFF0000 and -65536 are equivalent; they're two separate ways to write the same number. Colors are just specially structured numbers, and they're usually written out in hexadecimal rather than decimal. Writing them in decimal is equivalent, but completely unintuitive, because it doesn't reflect the actual structure of the color value itself. That's why you shouldn't do that, even if you can.
Think about it this way: if color values were instead six-digit decimal numbers with three components (two decimal digits per component), you might write red as 990000. But that isn't how it works, because a value structured around decimal digits is less convenient for the computer to deal with in binary form.
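If it helps, you can see that directly with a quick sketch:
PRINT &HFFFF0000            ' prints -65536
PRINT &HFFFF0000 == -65536  ' prints 1 (true)
PRINT RGB(255, 0, 0)        ' also prints -65536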
When you converted -32 into binary and hexadecimal, how did you end up with &b11111111111111111111111111100000 and &h20 (now corrected to &hFFFFFFE0)? I never really learned how to write negative values in those systems..
This is because the integer type in SB is a two's complement signed integer. I'm not really the best person to explain what or why that is, so I'll let Stack Overflow do the talking.
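The one-line version, though: to negate a number in two's complement, you invert every bit and then add 1, which is why a small negative number comes out as a long run of 1 bits with a few 0s at the end. Here's a quick sketch of that in SB (XOR with &HFFFFFFFF just flips all 32 bits):
PRINT (32 XOR &HFFFFFFFF) + 1 ' flip every bit of 32, then add 1: prints -32
PRINT &HFFFFFFE0              ' the same bit pattern written in hex: prints -32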

Thanks for your help!