-
Hi,
And what exactly is your expected outcome of reverse engineering? I am working on automatic calibration for the angle sensors right now, but it will be available only on servo stations (the 500/4000 is the main target for now; the 600 will be next). I also have X-ray images of a few boards from a 500 station (PSU and angle board) and can make more if necessary.
-
If there is a genuinely capable Geodimeter servicer, authorized by Trimble to work on my Geodimeter, left anywhere in North America, I would like to know, and to have the opportunity to ask how much they would charge for service. I have an inquiry in with the only one I can find in Canada, and I have received no reply as to whether they even still have the capability. I have had old equipment that I sent in for service come back cannibalized by the servicer instead, marked as "unserviceable". To be clear, it came back missing large components I know for certain were there when I sent it in, so I was then unable to recover its value by selling it "complete, as-is, for parts".

Puzzles are really fun, but a puzzle that is not usable in the end as a calibrated unit is less interesting to me. I like old theodolites because I can recalibrate them until they are busted. They will still be able to do what they do 1000 years from now. Bury one in a suitable case in the sand and someone might excavate it in the far future, like the Antikythera mechanism.

As great as the auto-calibrate software might be, I have seen a total station calibration lab of the era of the GDMs covered in this effort, and the physical setup included at least four autocollimators fixed around the mount point for the unit under test. I am not certain every authorized servicer had a complete calibration rig, or whether some assumptions were made; there may have been differences in the quality of the calibrations from different authorized servicers. The auto-calibrate software must either assume the unit is switched on each time in such a physical facility and run through the same checks with human assistance, or make some other assumptions.

I have read and fundamentally understood the approaches being used here to analyze the boards and memory images. At the same time I am astounded by the sophisticated range of hardware and techniques being employed, even using a thermal camera and a current-limited supply to detect bad components -- obviously the best easy method now. I remember when I used a current-limited supply and traced the very small, progressive delta-V along the traces where current was flowing, in a "warmer-colder" approach that led me toward the offending current sink. Since I had access to both the supply and ground buses, I had at least two trees of traces I could search to locate current-sink problems. At that time thermal cameras cost more than a used car and ran on liquid nitrogen. I had access to one of those, once, in one of the labs I worked in; it was certainly not something I could abscond with back to my lab bench and use for just anything. I share this so people without a thermal camera are not discouraged -- you just need a 4+ digit voltmeter and patience.

So I understand one of the challenges is that the main board memory of the GDM contains some sort of checksum over the software code and the constants that in part represent calibrations. One of the problems is that one cannot recalibrate a GDM -- at least, one cannot write the new calibration constants into the battery-backed memory -- because the memory would then fail the checksum test on boot-up.
I am going to assume that if the checksum test were performed by software located in the RAM, then any download of the main board memory would reveal the checksum-checking code, allowing it to be reverse-engineered or short-circuited in the RAM image so that its result is always "approve". Long past the sunset of support for these units, that would allow people to maintain their own unit in proper calibration indefinitely by doing a lab recalibration periodically. If the code is not in the battery-backed RAM, the next possibility is an EPROM. At least one EPROM has been identified and downloaded, so I assume the checksum code is not there either, or it would have been found. Having reviewed the work done so far -- with circuit reconstructions sufficient to rebuild the memory map by tracing the address-decoding circuitry and the chip-enable outputs from the chip-select chip(s) to all other chips -- I will assume that if the checksum code were in any EPROM, that EPROM would have been discovered and decoded.

That leaves the checksum being hard-coded into something like an FPGA that does not even enable the computer until after the checksum runs. I recall that old 8088 desktop computers in the 1980s would run a memory test at boot (yes, part of the BIOS; yes, stored in EPROM/EEPROM). It would be conceivable in the mid-1990s to take a memtest algorithm, augment it with a checksum test, and put it into an FPGA. A combined success signal would indicate both that every byte of RAM was functional and that the contents passed the checksum test. So if the RAM test were:
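something along these lines -- a sketch only, written as C rather than FPGA logic; the 0x55 pattern, the loop shape, and the function name are my own assumptions, with reg1 being the accumulator I refer to below:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: an EOR read-back memory test of the kind described here.
 * The 0x55 pattern, the loop shape, and the function name are assumptions;
 * reg1 is the accumulator referred to in the text below. */
uint8_t ram_test(volatile uint8_t *ram, size_t ram_size)
{
    uint8_t reg1 = 0;
    for (size_t addr = 0; addr < ram_size; addr++) {
        uint8_t saved = ram[addr];   /* keep the original byte              */
        ram[addr] = 0x55;            /* write a known pattern               */
        reg1 ^= ram[addr] ^ 0x55;    /* read back; contributes 0 on a match */
        ram[addr] = saved;           /* restore the original byte           */
    }
    return reg1;                     /* 0 => every byte read back correctly */
}
```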
then the result would be zero if every byte of memory returned exactly what was written to it. To modify this into a combined checksum and memory test:
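one could do something like the following (again only a sketch; where exactly C1 and C2 enter the calculation is a guess, chosen so that the weaknesses I describe below fall out of it):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: the read loop folded together with an EOR checksum.
 * C1 (seed) and C2 (per-byte constant) and their placement are guesses. */
#define C1 0x00   /* hypothetical seed constant     */
#define C2 0x00   /* hypothetical per-byte constant */

uint8_t ram_checksum(volatile uint8_t *ram, size_t ram_size)
{
    uint8_t reg1 = C1;
    for (size_t addr = 0; addr < ram_size; addr++) {
        uint8_t reg2 = ram[addr];    /* reading every byte also exercises the RAM   */
        reg1 ^= reg2 ^ C2;           /* fold the byte (and C2) into the running EOR */
    }
    return reg1;                     /* compared against a fixed value, e.g. zero   */
}
```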
Why EOR? Because it is bit-wise and reasonably easy to create the proper checksum value for a given RAM image with new calibration values. Also, for an EOR algorithm coded into an FPGA, the logic for each bit is partitionable, so the algorithm takes up less space in the FPGA. Cryptographers might easily have further suggestions involving bit-wise roll instructions, but those would be harder to reverse when generating a valid checksum byte, because its value would depend on where in the RAM space the checksum byte is located. Bit-wise rolls of either reg2 or reg1 might also be more space-consuming in an FPGA because they bind the logic for each bit together. I think something like this is about the most complex checksum one might be able to implement in an FPGA of the time (I could easily be wrong).

The algorithm as written has some weaknesses. If there is an even number of bytes checked, then it does not matter what the value of C2 is: all valid RAM images will EOR to zero with any value of C2 if C1 is zero, and if C1 is not zero, I believe all valid RAM images will EOR to C1. One could make the EOR with C2 conditional on the value of the byte in RAM and/or the value of the ADDR register, so the number of EORs is not simply equal to the number of bytes, among other tweaks. There would be ways to code these so that, when implemented in an FPGA, they do not greatly entangle the calculations for each bit. Hmmm... how to make it more obscure without making it too complex for an FPGA of the day... Remember, Bitcoin did not exist until 2008, and FPGAs capable of more complex coding, like mining hash generation, did not appear immediately; the FPGA in the GDM is probably a decade more primitive. That raises the question: which hash functions were contemporaneous with the GDM and programmable in FPGAs of the day? Another group of candidate checksum algorithms to check.

I think two checksum-valid memory images from two identical units would be sufficient to gain insight into how to make a RAM image pass a checksum test. For the example algorithm, and anything else that operates over the entire memory (alone or in combination with a memtest), the checksum does not need to be a byte at the beginning or end of memory; it can be an otherwise unused byte anywhere in the battery-backed RAM. Mapping the memory use of all parts of all code in an image will determine which bytes do not serve any function in the code. I would focus on the bytes that are unused and that differ between the two images as candidates for the checksum byte. Since none of these bytes are used by the code for anything else, any of them could be modified to make a correct checksum, even if that was not their original designated purpose.

This all assumes that the checksum check is some sort of whole-memory test looking for a particular constant, like zero, as the final output. Generating a checksum for any candidate image would then involve setting the byte you intend to use as the checksum to zero, running the algorithm to obtain CS1, and then finding the value for the checksum byte that makes the final checksum equal zero. I may be wrong, but this sort of algorithm over all RAM bytes is the next logical thing to try after searching all ROM space for a hard-coded checksum algorithm. I do not think one will have to tinker long with all-memory-byte algorithms to find a solution. One could assume any bit size for the reg1 accumulator.
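For a plain EOR over the whole image, that last step is trivial, since EORing CS1 back in cancels it. A sketch (the function name and chk_offset, the spare byte chosen to carry the checksum, are hypothetical, and a final target of zero is assumed):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: fixing up a candidate image so a plain whole-image EOR
 * comes out to zero. fixup_image and chk_offset are hypothetical names. */
void fixup_image(uint8_t *image, size_t size, size_t chk_offset)
{
    uint8_t cs1 = 0;

    image[chk_offset] = 0x00;        /* 1: zero the chosen checksum byte     */
    for (size_t i = 0; i < size; i++)
        cs1 ^= image[i];             /* 2: run the algorithm to obtain CS1   */
    image[chk_offset] = cs1;         /* 3: EORing CS1 back in cancels it, so */
                                     /*    the whole-image EOR is now zero   */
}
```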
If the data is read in bytes (if the RAM chip has an 8-bit bus) but the reg1 accumulator were, say, 4 bits, then maybe there was a high-nibble/low-nibble EOR to reduce each byte to a nibble (a one-line sketch of that fold appears at the end of this comment). I think that would just create ambiguity -- more freedom of choice among checksum bytes that would satisfy the algorithm -- and a 1-in-16 chance of randomly guessing a checksum that works without specifically reverse-engineering the algorithm. One could brute-force a valid image by hand within dozens of attempts, simply by randomly generating an unused RAM byte and trying the image on the unit. An accumulator wider than the RAM chip's data bus might do sequential reads, so two RAM bytes of checksum would be needed, but I think that might take up too much space in FPGAs of the day. There are even faster methods of brute-forcing: take the main board and connect it to a shared-memory "virtual chip", where a supervisor system guesses a checksum byte and lets the main board check it, looking for one that is accepted. A hardware implementation of this might check checksums as fast as the main board can evaluate them. Again, a general reversal of the checksum algorithm is not necessarily required to produce a valid RAM image with new calibration constants, if one has a spare main board.

Personally, I would prefer to pay a reasonable amount of money to a genuinely capable authorized servicer who will not cannibalize the unit I send them, and end up with a functional, properly calibrated instrument. If it is no longer possible to get authorized service, I should have the right to repair it myself, and that requires the ability to produce RAM images with new laboratory-quality calibrations that the unit will accept as valid, by one means or another. If I were to find a solution, I should not be prohibited from using it myself. I can understand that someone might be angry if I printed the solution on T-shirts -- the way the code key for DVDs was once printed on T-shirts.

There may also be an electrical bypass. With two images, one valid and one invalid, one could look for the difference in the electrical signals coming out of the FPGA toward the processor, determine which one is the PASS signal, and cut a trace and wire it high or low as needed. The basic BIOS ROM code may be more complicated than a single check of a PASS signal, but then it would look for, say, first a high and then a low on the PASS bit, telegraphing even more clearly in the code which bit is the PASS bit from the FPGA, and again, I think it likely someone would have noticed this in the code. Another possibility is that the FPGA is programmed as a dongle-like, memory-mapped device on the data bus -- give it a byte, get a byte -- but then again the checksum code would have to be conspicuous and appear in an EPROM somewhere, even if part of the logic is obscured within the FPGA.
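The nibble fold I have in mind would be nothing more than this (illustrative only; the name is made up):

```c
#include <stdint.h>

/* Sketch only: reduce each byte to 4 bits before it is accumulated. */
uint8_t fold_to_nibble(uint8_t b)
{
    return (uint8_t)((b >> 4) ^ (b & 0x0F));   /* high nibble EOR low nibble */
}
```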
-
My understanding was that the checksum had to be right or the GDM main body computer would not attempt to start. Therefore, after solving the calibration problem, it seemed the next problem was the checksum, so one could craft a RAM image that would be accepted. I cannot recall whose message led me to this understanding, but it is here somewhere. I do appreciate the efforts to generate calibrations; I was just thinking ahead to the next step.

A little background on my unit: I purchased it "as is" in "good condition". It does appear to be complete, as if it was not damaged before being taken out of service, but the batteries on the main board may have died then, causing it to be taken out of service. When I bought it I accepted the possibility that it was bricked and a hopeless case, not just something that needed new batteries and a download to be functional but out of calibration. I have now verified that no current Trimble dealer in Canada has any ability to service these machines. Even if the unit had been taken to a servicer by a previous owner, and even if that servicer once had the old RAM image for my unit on a floppy diskette somewhere, it has likely been disposed of -- even if they are still in business, and even if they have not thrown out all the shop tools they would need for a GDM.
After writing my last post, I started to think: any level sensor, even the "compensator" of a GDM, can be read in the 0-degree and 180-degree horizontal directions, and the tribrach level twiddled to get the same reading either way. Even a sensor that is not perfectly linear in angle, like an old hand- and flame-bent spirit level tube, is still level when the angle reading remains constant across a 180-degree swing. If the compensator can be mechanically trimmed to read "level" when it is in fact level, then the GDM can be levelled like a theodolite, and a level compensator reading will mean the compensator equations do not modify the angle readings to "compensate". Yes, it would change the levelling procedure a little. Mounting a sensitive, trimmable spirit level on the GDM would probably make this easier, at least to get really close to level. I imagine the non-linear effects of the sine/cos correction table differentiate to zero at a zero reading on the compensator. It really is not much extra work to perfectly level an instrument and keep it under an umbrella so the sun hitting one side does not distort it, compared to chopping brush, or needing a second man to use it at all.

Once this is done, then, thinking like an old optical theodolite, it would be possible to calibrate the vertical circle level, again by exploiting the rotation of the telescope by 180 degrees at the same target. I used to use a metre stick marked in millimetres at >50 metres, but I later printed out a piece of paper with sub-millimetre calibration capability in a smaller space to get close, without the real estate or working outside, particularly in winter. The special 30 mgon tribrach is not needed. One can draw two lines on a piece of paper that are 30 mgon apart when viewed from a particular distance, and then just adjust a regular tribrach to move from one line to the other in the optics. A metal machinist's scale would work too, if hung from a nail on a fencepost at the right distance. The best solution I found is PostScript and paper: the PostScript language is useful for drawing metrically accurate figures on paper, and a PS-to-PDF converter exists. You can double-check the output with calipers to make sure the printer you used did not distort the print. I am just showing options for people without the special tribrach.
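To size that drawing, the spacing between the lines is just the distance times the tangent of the angle. A quick check in C (the 30 mgon and 10 m figures are only an example, not a prescribed setup):

```c
#include <stdio.h>
#include <math.h>

/* Sketch only: spacing between two lines that subtend a given angle at a
 * given distance. */
int main(void)
{
    const double PI = 3.14159265358979323846;
    double angle_mgon = 30.0;
    double distance_m = 10.0;
    double angle_rad  = angle_mgon * PI / 200000.0;   /* 1 gon = pi/200 rad */
    double spacing_mm = 1000.0 * distance_m * tan(angle_rad);

    printf("%.0f mgon at %.1f m -> %.2f mm between lines\n",
           angle_mgon, distance_m, spacing_mm);
    return 0;
}
```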
Can you point to which document describes this step of a shop calibration for a GDM? The vertical circle and horizontal circle seem harder. In a GDM shop, with a GDM set up at the test location, one can point it into collimators at preset angles -- putting a different precision instrument at the test point can help verify the angles. For the vertical circle, I have seen collimators at +/-45 degrees, only a two-point check; this would seem to be a check only. For horizontal, I saw two collimators set up 90 degrees apart as viewed from the test point. On an optical theodolite this makes it possible to manually turn the horizontal circle to zero at the first point and then, through a series of 90-degree swings, indexing the horizontal circle, take readings at four points around the circle to discover off-centre circle and elliptical deviation errors (see the small decomposition sketch below). I really don't know how serviceable elliptical errors are, but this will discover them. The GDM probably uses disk encoders, which would have the same problem if they became off-centred. The problem on the horizontal axis is that I do not know whether the horizontal circle is externally user-indexable, as it has to be on optical theodolites. So the solution would be four horizontal-circle collimators to detect elliptical deviation errors. For the vertical circle, probably the same +/-45 degrees will work.

For the horizontal circle, one can use a field and a good optical theodolite to distribute targets, self-check the theodolite's own horizontal circle, and tweak the targets as needed to get absolute angles independent of small defects in the theodolite's circle. One then puts the GDM at the test point to check its horizontal angle encoding disk. If one has a barn or industrial space, +45 degrees is not much of a problem, but -45 degrees requires a pit or something. Here an optical theodolite really cannot self-check and make sure the targets are not distorted by its own flaws, but one can certainly check a GDM against a known-good theodolite. All these things are possible for someone with a GDM that is alive, a good optical theodolite with a known flawless vertical circle, and some real estate to check it in. I have completely skipped over the tough job of turning what is learned into angle tables. Then solving the checksum problem, so that a RAM image with the new calibration tables will be accepted as not corrupted, would allow all of these calibrations to be a one-time or once-a-year thing, when the weather is favourable.

Between calibrations, one can actually use the measurements for calculations with uncertainties, in exactly the same way scientists and engineers have used equipment of all varieties with uncertainties for years. Only surveyors need a machine that is correct to the degree their methods require, to measure land and produce documents for sale. I don't need that accuracy to generate a topo map of a section of my own property, for my own uses, to calculate maybe how much dirt I need to move to level some area. My tracking robot GDM would make it a one-man job. I have theodolites; it is all theoretically possible to do the same job with two theodolites at good monuments overlooking the area of interest, but that is, at minimum, a two-man job, while swatting flies, which the second man does not do unless he is getting paid. Theodolites involve a lot of manually recorded data, and all points have to be visible from both theodolites -- that is a lot of brush to cut.
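For the four-point check, separating a once-per-revolution (eccentric) error from the twice-per-revolution (elliptical) cosine term is a small exercise in harmonics. A sketch with made-up numbers; note that four samples at 0/90/180/270 cannot resolve the sine part of the 2/rev term:

```c
#include <stdio.h>

/* Sketch only: decompose measured-minus-reference errors at 0/90/180/270
 * degrees into a constant offset, a 1/rev (eccentric) term, and the 2/rev
 * (elliptical) cosine term. The e[] values are invented placeholders. */
int main(void)
{
    double e[4] = { +3.0, -1.0, -2.5, +0.5 };              /* hypothetical, mgon */

    double zero_off  = (e[0] + e[1] + e[2] + e[3]) / 4.0;  /* constant offset    */
    double ecc_cos   = (e[0] - e[2]) / 2.0;                /* 1/rev cosine part  */
    double ecc_sin   = (e[1] - e[3]) / 2.0;                /* 1/rev sine part    */
    double ellip_cos = (e[0] - e[1] + e[2] - e[3]) / 4.0;  /* 2/rev cosine part  */

    printf("offset %.2f, eccentric (%.2f, %.2f), elliptical %.2f (all mgon)\n",
           zero_off, ecc_cos, ecc_sin, ellip_cos);
    return 0;
}
```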
With the GDM at one high point visible from two locations, I could for instance measure the total drop of my creek weekly in different states of flow, which I have done the hard way only once, on one day, with a theodolite. I would have to cut less than half the amount of brush if I used a GDM. The same things that made a GDM the choice of engineers in 1995 make it a useful choice now too. There is a plateau of usefulness between an uncalibrated but functional device running someone else's RAM image, and a fully calibrated device with the paperwork needed to support surveyor-level work. It seems to me that examining and comparing two RAM downloads from two machines would be useful for other reasons besides solving the checksum problem. Unfortunately, I do not think I have even one live RAM to contribute to that cause.
-
Dumb question, but did anyone check the Wayback Machine and other possibilities? I assume you know the web address.
-
All of this discussion has brought this project back into focus for me. I'll work today on getting that PR I've been talking about together, to add everything I've found to a useful place.
-
OK, so my long write-up about combining a memory check and a checksum check was completely off. I have not eliminated the possibility that the FPGA does some bit-messing mechanics on bytes written to it and read from it, to obfuscate, like a sort of dongle, but I have found the two code routines that calculate the "CRC", plus a second CRC-like thing with a similar pattern of code to the first one. I am quite sure of the first one because its output is later printed as "CRC=(16-bit word)", to the serial port I believe, in another piece of code. There are subroutines that the two algorithms call in a similar pattern that I have not figured out yet, but there is a lot there -- an extensive algorithm, with most if not all of it exposed in the EPROM code. Both routines also appear in a jump table, which suggests both algorithms are designed to be invoked by the battery-backed RAM image. If the FPGA is not involved like a dongle, then there is now hope of being able to create new RAM images with updated calibrations based on periodic lab work, or periodic tedious field checks, if one has an optical device with a known-good vertical circle.
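For anyone following along, this is roughly what a bitwise CRC-16 routine of that era looks like -- a generic CRC-16/CCITT in C, for orientation only; the actual polynomial, initial value, and bit order used by the GDM code are still undetermined:

```c
#include <stdint.h>
#include <stddef.h>

/* Generic bitwise CRC-16/CCITT (polynomial 0x1021). NOT a claim about the
 * parameters the GDM firmware uses. */
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;                   /* initial value: an assumption */
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;       /* bring the next byte into the high bits */
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```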
-
It is a delight to find someone else going deep into these units!
Two topics, one short and one long:
What was the root cause of your original failure to turn on? I have a 608S that has pushed me down the same reverse-engineering path you went down. So far I'm suspecting shorted tantalum capacitors sprinkled throughout the unit.
It would be nice to have a repository of binary dumps, files, and disassembly progress. Is there a reason you haven't shared what you've got already in this repo? I'd like to contribute mine as well but before making a pull request I thought it prudent to ask. I have dumps of all three EPROMs and Ghidra decompilation progress on the main control 8051. I'm working on getting the RAM; I suspect my RAM data is intact. Additionally I have teardown images and the beginnings of some hardware reverse engineering that would be nice to have here. I'm happy to organize those and submit a pull request with that data if you feel it appropriate.