BitcoinTalk

Big endian code problems

Big endian code problems

I've seen a lot of messages about some big-endian "issues" (ok, no worky) with the bitcoin client. Before I sit and try to analyze all the source code, are there any known "bad bits" of the code that assume LE?

I have some idle G4 and Sparc machines that can happily be put to use...plus an old decrepit Alpha that needs to be tested to see if it still boots...

Re: Big endian code problems

I also have a couple of G4 machines, but I think this issue is about much more than getting these old(ish) machines to good use. It is something like the Y2038 problem that some people are already discussing; will everyone keep using x86 all those decades, or should we drop the assumption of that particular architecture?

It is a nice fact that bitcoin happens to compile and run on ARM, one of the most serious challengers to the x86 monopoly, at least in the mobile and embedded space. Nevertheless, a digital currency is too important to be limited to particular computing architectures. Don't banks tend to use big iron?

Re: Big endian code problems

I've seen a lot of messages about some big-endian "issues" (ok, no worky) with the bitcoin client. Before I sit and try to analyze all the source code, are there any known "bad bits" of the code that assume LE?

I have some idle G4 and Sparc machines that can happily be put to use...plus an old decrepit Alpha that needs to be tested to see if it still boots...

Do you think that you might have replaced them sometime in the next 28 years?

Re: Big endian code problems

I have been thinking about an interim solution, inspired by the fact that many architectures can run in either endianness. AFAIK, Virtual PC used this feature of the PowerPC to run Windows on a Mac. Apparently, Linux also supports running little-endian PPC binaries on an otherwise big-endian system.

It will not be as simple as compiling with -mlittle-endian, as the libraries would likely need to have the same endianness. It might be possible to cross-compile a little-endian system, compile a static binary in the chroot, and run the resulting binary on the usual big-endian system. Or it could be run in the chroot, if that is easier to accomplish.

So far, I have not managed to build such a system, but it might be possible with a suitable cross-compiler. I already use Gentoo and Crossdev for such things, but I have not found a suitable target machine type for this.
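For what it's worth, a quick way to confirm what a given build actually targets is a tiny probe program; this is just my own suggestion, not something already in the client. It prints the byte order the compiled binary runs with, which should tell you whether a cross-compiled "little-endian PPC" binary really came out little-endian.

Code:
#include <cstdint>
#include <cstdio>

int main()
{
    // Look at the first byte of a known 32-bit pattern to see how this
    // particular build lays out integers in memory.
    const uint32_t probe = 0x01020304;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&probe);
    if (p[0] == 0x04)
        std::printf("little-endian build\n");
    else if (p[0] == 0x01)
        std::printf("big-endian build\n");
    else
        std::printf("unexpected byte order\n");
    return 0;
}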

Re: Big endian code problems

The ByteReverse macro should probably be skipped before doing SHA-256 transforms.
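To illustrate the idea (a sketch only, not the client's actual code): the SHA-256 transform consumes its message as big-endian 32-bit words, so little-endian machines swap each word first. On a big-endian machine the words are already in that order and the swap would corrupt the input, so a ByteReverse-style helper could simply become a no-op there.

Code:
#include <cstdint>

static inline uint32_t ByteReverse(uint32_t value)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    // Big-endian host: the word is already in the order SHA-256 expects.
    return value;
#else
    // Little-endian host: swap the four bytes.
    return (value << 24) | ((value << 8) & 0x00ff0000) |
           ((value >> 8) & 0x0000ff00) | (value >> 24);
#endif
}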

Re: Big endian code problems

The ByteReverse macro should probably be skipped before doing SHA-256 transforms.

Before and after? There are several ByteReverse calls that probably need removal for the nonce and the timestamp also.

In fact, you may be able to do away completely with the temp block header, which is mostly just there so it can be ByteReversed.
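Roughly what I have in mind, as a simplified sketch rather than the actual miner code (the struct and function names here are just for illustration): the temporary header copy exists mainly so its 32-bit fields, nonce and timestamp included, can be ByteReversed before hashing. On a big-endian machine the fields are already in the order the transform expects, so the reversal pass collapses to a plain copy, or goes away entirely if the header can be hashed in place.

Code:
#include <cstdint>
#include <cstring>

// The 80-byte block header viewed as 20 32-bit words.
struct HeaderWords { uint32_t w[20]; };

void PrepareHashInput(const HeaderWords& hdr, HeaderWords& tmp)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    // Already in big-endian word order; no per-word reversal needed.
    std::memcpy(&tmp, &hdr, sizeof(tmp));
#else
    // Swap each word, as the existing ByteReverse calls do.
    for (int i = 0; i < 20; i++) {
        uint32_t v = hdr.w[i];
        tmp.w[i] = (v << 24) | ((v << 8) & 0x00ff0000) |
                   ((v >> 8) & 0x0000ff00) | (v >> 24);
    }
#endif
}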

Re: Big endian code problems

The ByteReverse macro should probably be skipped before doing SHA-256 transforms.

Before and after? There are several ByteReverse calls that probably need removal for the nonce and the timestamp also.

In fact, you may be able to do away completely with the temp block header, which is mostly just there so it can be ByteReversed.



I think all of them can go away, since SHA-256 expects its bytestream to be big-endian. The fastest way to find out, I think, is to run the code through a debugger on a BE and an LE machine at the same time and compare results at every step.
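A cheaper first step than stepping both machines in lock-step would be to hash a known test vector on each box. This is just my suggestion, assuming OpenSSL's SHA256 routine is available there: if this check passes on the big-endian machine but the client still computes wrong hashes, the problem is in the client's own byte handling rather than in SHA-256 itself.

Code:
#include <openssl/sha.h>
#include <cstdio>
#include <cstring>

int main()
{
    // FIPS 180 test vector: SHA-256("abc").
    static const char expected[] =
        "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad";

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char*>("abc"), 3, digest);

    // Print the digest as hex and compare with the published value.
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);

    std::printf("%s\n",
        std::strcmp(hex, expected) == 0 ? "SHA-256 OK" : "SHA-256 MISMATCH");
    return 0;
}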

Re: Big endian code problems

The code assumes little-endian throughout and was written with the intention of never being ported to big-endian.  Every integer that is sent over the network would have to be byte swapped, in addition to many dozens of other places in code.  It would not be worth the extra source code bloat.

Big-endian is on its way out anyway.
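To make concrete what "byte swapped" would mean in practice, here is a sketch, purely as an illustration and not code from the client (the helper names are mine): every integer written to the wire would have to go through explicit little-endian routines like these, in every place the code currently copies raw integers into a buffer.

Code:
#include <cstdint>

// Write a 32-bit value as little-endian bytes regardless of host byte order.
inline void WriteLE32(unsigned char* out, uint32_t v)
{
    out[0] = static_cast<unsigned char>(v);
    out[1] = static_cast<unsigned char>(v >> 8);
    out[2] = static_cast<unsigned char>(v >> 16);
    out[3] = static_cast<unsigned char>(v >> 24);
}

// Read a 32-bit little-endian value back into the host representation.
inline uint32_t ReadLE32(const unsigned char* in)
{
    return static_cast<uint32_t>(in[0])
         | (static_cast<uint32_t>(in[1]) << 8)
         | (static_cast<uint32_t>(in[2]) << 16)
         | (static_cast<uint32_t>(in[3]) << 24);
}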