Computerized data recording is common in engineering. There are dozens of such systems available: dedicated multi-user, multi-tasking systems that can even use the analysed data to automatically control subsequent test procedures. On a smaller scale, anyone with a PC can gradually build their own small system out of commercial modular components such as a plug-in board and general purpose off-the-shelf data analysis software. However, even the best of these are seldom adequate for in-the-field race car requirements -- resistance to vibration, noise, and heat, plus restrictions on cost, portability, and size -- which is why dedicated race systems were developed.
About the same time, Honda racing engineers asked me to come in and discuss systems that might be useful in their new F1 activity. After a two-hour monolog on my part, they asked some questions that indicated they knew a lot more than they were telling. As it turned out, they were about to sign with the Williams Team, where Frank Dernie was already using a 4-channel tape recorder on his cars to gather data for later computer analysis.
Eventually, Williams/Honda made a deal to share computer technology with an American CART team headed by Ian Reed, a former March engineer with a strong background in computer applications to simulation and design. So Reed brought in a hardware expert, Kurt Borman, who spent a couple of years developing a data-taking and analysis system, later marketed as the Motech. It started with 4 channels, taking data at half-second intervals for analysis on an Apple, and worked up to a portable computer analyzing at least eight channels simultaneously.
Although drag racing seems like a low-tech sport which contributes little to circuit racing, it's interesting to note that their data recording requirements may be even more demanding: things happen so fast -- sub-5-second runs at over 300 mph -- that only a computer can handle them. As early as the 1983 season, the Kenny Bernstein team was developing a system that could take 32 channels of data at 100 samples per second. Eventually Bernstein's engineers came up with a simpler and more marketable system called Racepak, which will be described later.
In 1985, Harry Gant's NASCAR Chevrolet had an extremely sophisticated telemetry system. The onboard data acquisition system was a 40-pound box that could continuously monitor 24 different sensors at a rate of 55 samples per second. The data was then transmitted to a receiving and analysis console via a 4-watt transmitter, and a 30-foot receiving antenna. However, at the end of that season, NASCAR officials became wary of such systems actually being used to monitor a car during a race, and outlawed them.
Incidentally, data telemetry will not be covered in great detail here, even though it sounds exotic and is commonly used in Formula One and CART teams. My own experience with it convinced me that it was seldom worth its development and maintenance problems unless there was someone on it full-time. When you get down to reality, not that many applications outside of engine development and "maybe" oversteer/understeer balance demand instantaneous real-time observation of data while the car is on the track.
Over the last couple of decades, computerized DAS has evolved along with advancing computer technology, driven by many overlapping commercial interests: (a) Factory engine development engineers need faster and more precise control systems, especially in telemetry, and they can afford the best. (b) Chassis development engineers can count on the DAS manufacturers to continually provide them with just a little more capability that they can apply. (c) The dash gauge and instrument manufacturers increase their product sophistication by incorporating memory that can also be downloaded for later analysis.
It's not appropriate to focus on specific DAS products in a book that faces years in print, while the products change monthly. The key is in knowing the most important general selection factors, or how to make an intelligent choice for your specific needs. Even when software demo discs are available, unless you try all of them, you may be so impressed with one that you miss something important that's available from another company.
Initial cost will probably be your first consideration, if you aren't lucky enough to be with a top series pro team where budget is no problem. However, you should really think in terms of cost/effectiveness. You could probably learn everything you need to know with a four-channel system for about $2000, but it would take a lot more time and laps. As the cost of track time and car wear escalates, amortizing a more expensive system over years of saved test days makes a lot of sense. Costs have tended to hold steady in roughly a $500-1000 per channel range, depending on software bells and whistles -- plus the host computer cost, which can be justified for many other applications. On the other hand, an experienced electronics technician could put together his own system for half that, using inexpensive modules and general purpose analysis software like LabVIEW, DADiSP, MATLAB, or Excel.
Precision or resolution may be a key issue in logger selection, depending on your specific test requirements. Common 8-bit precision allows you to make each measurement to one part in 256 (two to the eighth power) over your full-scale range. That is, if you have a 200 mph range, each step increment will be 0.8 mph. If changes in the measured quantity are fairly slow and smooth, your program can mathematically interpolate to smaller fractions. Conversely, 12-bit precision may be overkill, at one part in 4096 (two to the twelfth power), or an increment of .05 mph on a 200.00 mph range. Remember to note the distinction between data precision and processor precision -- the host computer may work in 32-bit words, but that says nothing about the resolution of the logged data.
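To make the arithmetic concrete, here is a minimal Python sketch of the step-size calculation, using the text's 200 mph full-scale speed channel as the example:

```python
# Resolution of an N-bit logger channel over a full-scale range:
# the smallest step it can distinguish is range / 2**bits.
def resolution(full_scale, bits):
    """Return the step increment for one count of an N-bit converter."""
    return full_scale / (2 ** bits)

print(resolution(200.0, 8))    # 8-bit: one part in 256, i.e. 0.78125 mph
print(resolution(200.0, 12))   # 12-bit: one part in 4096, i.e. 0.048828125 mph
```

The 8-bit step rounds to the 0.8 mph quoted above; the 12-bit step rounds to the .05 mph figure.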
Channel capacity required in the logger will depend on the complexity of the tests and the operator's experience. Starting out, eight channels can satisfy most needs, and any more can overwhelm all but a full-time engineer. Expandability, in terms of added plug-in channel modules, may be an important consideration if you expect to be growing with the system. Over 32 channels may be available on a logger -- but then you have to consider the added cost and maintenance of 32 transducers.
Channel types are as important as number, since requirements are different for analog sensors such as temperature or pressure, and digital channels for rpm, speed, distance, and events such as laptime or segment timing from external trackside beacon inputs.
Maximum sample rate you'll need should be at least 3-4 times the maximum oscillation rate of your data. A 20-Hertz system would be adequate for overall vehicle dynamics, aero loads, and driver evaluations. At the other extreme, analysis of engine vibrations, injector pulses, or torque fluctuations on each revolution could require 1000-Hertz to get meaningful data. Faster sampling gives greater precision -- but it can also produce excessive amounts of data. The best restraining force is to frequently look at the mountains of data that are ignored and/or thrown away, and reduce expectations accordingly. It's hard to comprehend -- even with computer analysis -- a megabyte of data (24 channels X 500 samples/sec X 80 seconds) for every lap.
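The megabyte-per-lap figure in parentheses works out as follows (a short Python sketch of the arithmetic, assuming one byte per sample, consistent with an 8-bit logger):

```python
# Back-of-envelope data volume for the lap described in the text:
# 24 channels at 500 samples/sec over an 80-second lap.
channels, rate_hz, lap_s = 24, 500, 80
bytes_per_sample = 1                      # one byte per 8-bit sample

samples = channels * rate_hz * lap_s
print(samples)                            # 960000 samples per lap
print(samples * bytes_per_sample / 1e6)   # roughly 1 megabyte per lap
```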
Harness connector size, durability, and availability should be considered, especially if you expect to be modifying the system, as opposed to having a permanent pre-wired harness. Some of the more sophisticated aerospace connectors are extremely expensive, and so small you need a magnifying glass and expensive tools to assemble them. On the other hand, you'll probably find that the most common system failure will be in the connectors, so don't expect Radio Shack hardware to stand up.
Total memory capacity of loggers is reaching a level where they should easily meet any data acquisition demands between pit stops. Already, many megabytes of data can be stored on a removable memory card the size of a stick of gum. Compacting it more just makes it easy to lose.
Size and weight are hardly important considerations any more, even in formula cars, since memory and processors have shrunk to the point where their container size is limited by space required for the harness plugs, switches, and readouts.
Training, track support, spares. In high-tech companies, size matters. They should be big enough to have people dedicated to personal service. Unless it is a very simple system, there should be some sort of training program, at least classroom, if not one-on-one. Ideally they will have enough customers to justify a track support engineer at major events, equipped with spare hardware and minor wiring repair facilities. This is expensive service, and should be factored into the original price consideration.
Familiarity to users. In some series, an overwhelming consideration may be the established "default standard," a brand that has established a near monopoly. This means that if you want convenient support, and the ability to pirate top crew from other teams, you have to follow the herd. At least this is true until another manufacturer comes along with a breakthrough in data acquisition that provides a great competitive advantage.
Demonstrated hardware durability and debugged software. A newly-announced apparent innovation in any technology is only as good as its development program. The world's greatest new thing will drive you crazy if it hasn't been through a "beta test" to identify and fix the inevitable problems that only a million user trials can uncover -- unless you recognize the importance of the innovation, and your role in debugging, and are willing to put up with it (perhaps fixing the bugs yourself to keep an advantage).
Company stability and growth. Historically, there have already been many good DAS products from companies that disappeared for various reasons, leaving "orphan" systems behind. A successful technology company has to put just the right resources into future development -- enough to keep ahead, but not enough to bankrupt them. Or you could buy an inexpensive system and expect it to be disposable.
Upgrade/expansion. Advancements come so fast in this business, that you should expect new hardware to be compatible, and software upgrades to be inexpensive and convenient to install. In fact, new software should be available via the Internet. Hardware upgrades are most commonly more memory, more channels, or faster processing. Trade-in might be possible, making upgrades less expensive, and providing a "trickle-down" affordability to newcomers. The transducers usually have common signal electronics, if not connectors, and can be carried over through many systems.
Component cost/quality/availability. Take a good look at the transducer catalog. Don't buy an inexpensive data logger that requires expensive heavily marked-up transducers, just because they have special connectors installed on commonly available components. The experienced instrumentation technician might save a bundle by buying military surplus transducers, or buying them independently from the source, and wiring them himself. For example, some fairly accurate transducers may be common in most production cars, such as rotary throttle pots.
Selection of graphed channels allows you to tailor the output to your immediate needs. Traditionally you will record far more channels of data than you expect to use, as insurance, then only display what you need in any analysis. And you should be able to easily call them up in pre-selected categories, such as the handling group, the engine group, or the aero group.
Scale/calibration/zero adjustment is necessary to correct for unintentional drift or mechanical shift in the transducer signal. It also allows you to isolate curves by shifting them vertically with an offset, and to scale them up to fill the screen, or to show more detail.
Zoom for time or distance range is a way of focusing in on smaller and smaller increments of time, starting with the "big picture" of all laps in a session, down to visualization of each individual lap, or as small as one individual sample time-step.
Re-zero time base is a way of resetting the time base to zero at the start-finish, for correlation with stopwatches or the track timing system, or isolating laps for easier overlay.
Color-coded data curves, along with a color reference table, are mandatory for fast identification of multiple traces. It's important to be able to assign selected colors for consistent reference and grouping, instead of random assignment.
File storage and access convenience will become more and more important as your data library grows. It will be useful to search and identify data by many categories, such as date, or track, or car, or driver, or test type.
Download method/time. For really large data sets, or for professional series where you might want to download during a fuel stop or tire change, the time and method can be critical. Physical contact possibilities are cable connection or plug-in memory modules or cards of various types and sizes. Non-contact options might include infra-red or telemetry.
Screen arrangement options. More sophisticated analyses may require multiple windows to be visible simultaneously, such as speed trace, gauge readings, and track map. Every user will have a preference as to where and how large the windows are located on the screen, for faster comprehension.
Sample frequency variation on different channels. Instead of sampling all channels at the same rate, you can save a lot of memory by sampling each channel only at its relevant rate. Engine torque data may require 100 samples/second or more, but inlet temperature won't change over many seconds.
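As a rough sketch of the savings, take a hypothetical four-channel logger where only two channels really need the fast rate:

```python
# Memory used over an 80-second lap: uniform fast sampling versus
# per-channel rates. Channel names and rates are hypothetical.
channels = [("torque", 100), ("rpm", 100), ("inlet_temp", 1), ("oil_temp", 1)]
lap_s = 80

uniform = len(channels) * 100 * lap_s              # everything at the fastest rate
mixed = sum(rate * lap_s for _, rate in channels)  # each at its relevant rate
print(uniform, mixed)  # 32000 vs 16160 samples -- roughly half the memory
```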
Noise filtering options. Filtering is another subject worth an entire chapter, whether the noise is physical, such as pavement roughness, suspension or lateral vibration, or aerodynamic turbulence, or electronic noise from ignition or radio interference. Instead of simply slowing the sample rate, or averaging data points, it can be useful to have other mathematical filtering options, to avoid hiding real spikes, points of interest, or sudden changes that filtering can smooth over.
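As one simple illustration of the trade-off, here is a basic boxcar (moving-average) filter; note how it would also flatten a real spike, which is exactly the hazard the text warns about:

```python
def moving_average(data, window):
    """Boxcar filter: replace each point with the average of its neighbors.
    Cheap and effective on random hash, but it smooths real spikes too."""
    half = window // 2
    out = []
    for i in range(len(data)):
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        out.append(sum(data[lo:hi]) / (hi - lo))
    return out

noisy = [10, 11, 9, 30, 10, 11, 9, 10]   # is index 3 noise, or a real event?
print(moving_average(noisy, 3))          # the 30 is blurred into its neighbors
```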
Time-base versus distance-base option. It's natural and obvious to plot data as a function of time. But when overlaying curves, the car's instantaneous location will shift slightly due to any variations in speed. This can be cured by automatic conversion to a distance-base, so that the horizontal axis is in feet instead of seconds. It also allows data to be located more accurately when matching track disturbance features that show up in the data.
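A minimal sketch of the conversion, assuming a speed trace already in feet per second: integrating speed over each sample interval gives the cumulative distance to use as the new horizontal axis.

```python
def to_distance_base(speeds_fps, dt):
    """Convert a time-based speed trace (ft/s, fixed sample interval dt)
    to cumulative distance in feet, for re-plotting against distance."""
    dist, total = [], 0.0
    for v in speeds_fps:
        total += v * dt       # distance covered this sample interval
        dist.append(total)
    return dist

# Constant 100 ft/s for one second, sampled at 10 Hz: ~100 ft covered
print(to_distance_base([100.0] * 10, 0.1)[-1])
```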
Select best. It may take some time for the human mind to scan a table of numbers and identify the best lap, so that should be a simple and convenient software function, whether for best lap, best segment, or top speed.
Histograms. Bar charts that summarize data by categories of time increments give a quick visual reference for such factors as time spent at different rpms, different throttles, braking, or left or right lateral g's.
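A histogram of this kind reduces to counting samples per band; since samples arrive at a fixed rate, each count is proportional to time spent in that band. A minimal sketch with hypothetical rpm samples:

```python
from collections import Counter

def rpm_histogram(rpm_samples, bin_width=1000):
    """Count samples in each rpm band of the given width."""
    return Counter((r // bin_width) * bin_width for r in rpm_samples)

samples = [5200, 5400, 6100, 6300, 6400, 7100]
print(sorted(rpm_histogram(samples).items()))
# [(5000, 2), (6000, 3), (7000, 1)] -- most time spent in the 6000s
```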
Min/max/average over a defined period. This is another statistical manipulation that is difficult for humans, but a meaningful quick reference for straightaway max or cornering min speeds.
Math channels for data manipulation. This is a way of combining different channels of data to produce another artificial one. For example, a plot of the point-by-point differences in speed between two laps, or plotting the vector sum of lateral and longitudinal acceleration to give a "friction circle," or converting suspension deflections to pitch or roll angle, or combining steer angle, speed, and lateral g's to indicate understeer, or differentiating speed to get acceleration, or comparing acceleration in g's with engine rpm, driveline ratios, mass, and drag to read out an engine torque curve.
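The friction-circle math channel reduces to a vector sum; a minimal sketch:

```python
import math

def combined_g(lat_g, long_g):
    """Vector sum of lateral and longitudinal acceleration -- the radius
    on the 'friction circle' showing how much total grip is in use."""
    return math.hypot(lat_g, long_g)

print(combined_g(3.0, 4.0))   # 5.0 g (a 3-4-5 triangle, for illustration)
print(combined_g(1.2, 0.9))   # ~1.5 g: braking while still cornering hard
```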
Curve fit. If noisy data should theoretically fit a smooth exponential curve, then it can help to use an automatic curve fitting routine. Then, generation of the equation for that best curve fit can reduce a lot of data to simple coefficients, such as reducing an aerodynamic coastdown curve to a coefficient of drag.
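One simple way to do such a fit, assuming the data really is exponential (as a coastdown speed trace approximately is), is a least-squares line through the logarithms of the data, which reduces the whole curve to two coefficients:

```python
import math

def fit_exponential(times, values):
    """Least-squares fit of v = a * exp(b * t) by regressing ln(v) on t.
    Assumes all values are positive, as a coastdown speed trace is."""
    n = len(times)
    ys = [math.log(v) for v in values]
    mt, my = sum(times) / n, sum(ys) / n
    b = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
         / sum((t - mt) ** 2 for t in times))
    a = math.exp(my - b * mt)
    return a, b

# Synthetic coastdown: speed starting at 100 units, decaying 5% per second
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
vs = [100.0 * math.exp(-0.05 * t) for t in ts]
a, b = fit_exponential(ts, vs)
print(round(a, 3), round(b, 4))   # the fit recovers the coefficients 100 and -0.05
```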
3-D Graphs. Visualization of three variables simultaneously on three axes, such as isometric carpet or waterfall plots, or by using variations in line color to represent the third variable, as in superimposing another variable on a speed-time trace.
File transfer, to a simulation, spreadsheet, or database. If your DAS software package doesn't have some desired features, your files should be in a format that can be transferred to another that does. For example, using the track map and speed trace data in a lap simulation, or sending the data to a spreadsheet for advanced graphing, or sending it to a database for report writing.
Annotation. It can be useful to be able to add operator or driver comments or engineering setup specs to the data on the screen, to help explain odd data without having to refer to a separate logbook. In the near future it should be possible to add comments in audio.
Animations of gauges, controls, or suspension deflection. This helps to visualize rapidly changing or complicated data, such as gauge readings or chassis motions, especially with telemetry in real time. It also helps the driver recall and understand what he was doing, in slow motion.
Track map creation, with correction for grade and camber. Although it's common to generate a track map from speed and lateral g's, these maps are more for rough location visualization, using the cursor to identify data location, than for accurate vehicle location. When "closing" lap positions don't match up, "fudge factors" are called on. Without full correction for the roll angle contribution to lateral g's, and correction for camber and elevation changes, the map will be just an approximation. Other sources for map error include inaccurate identification of a curve's start and finish, from normal steering noise, and variations in surface traction coefficient.
Track map surveys. For really accurate data matching and simulation, the map might be obtained from engineering surveys (sometimes available from the track owners), or from low speed position recording, perhaps also using GPS, which will become continually more accurate and less expensive. However, knowing the precise track geometry doesn't always identify the driver's path on it.
Superimposed data on track map. This is another visualization trick that plots one or two channels of data such as speed, braking, or lateral g's, perpendicular to the track curve, or vertically in a 3-D plot. This is kind of flashy, but not too useful because the scale factors are tough to illustrate around a curve, and the resolution tends to be poor. Sometimes digital data may be available in text boxes on the map.
Hot link between graphs/windows. When a number of data windows are open simultaneously, it can be convenient to have them linked so that a change in data selection or calibration made to one, is automatically made to the others.
Fast Fourier Transform or FFT. This is a way of plotting data not as a function of time or distance, but with respect to data frequencies. For example, suspension deflection for an entire lap can be reduced to how often it oscillated at different frequencies. You would expect to identify two major peaks on the graph, one at the wheel frequency around 10-15 cycles/sec, and one at the ride frequency of about 2-5 cycles/sec. It can also be used to correlate a suspicious vibration with likely sources known to be rotating or contacting at the same frequency, like gear tooth contact. And it can be used to record and plot a competitor's engine rpm history just from its sound.
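The idea can be sketched with a naive discrete Fourier transform (a real FFT library is far faster, but the result is the same); the synthetic trace below mixes a hypothetical 3 Hz ride motion with a smaller 12 Hz wheel hop, and the transform picks both peaks back out:

```python
import math

def dft_magnitude(samples, rate_hz):
    """Naive discrete Fourier transform (O(N^2), fine for short traces).
    Returns (frequency, magnitude) pairs up to the Nyquist limit."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        spectrum.append((k * rate_hz / n, math.hypot(re, im)))
    return spectrum

# Suspension deflection sampled at 64 Hz for 2 seconds; both test
# frequencies are chosen to land exactly on DFT bins.
rate, n = 64, 128
trace = [math.sin(2 * math.pi * 3 * i / rate)
         + 0.4 * math.sin(2 * math.pi * 12 * i / rate) for i in range(n)]
peaks = sorted(dft_magnitude(trace, rate), key=lambda fm: -fm[1])[:2]
print(sorted(f for f, _ in peaks))   # the two dominant frequencies, in Hz
```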
Data alignment, using a manual "nudge" or calculations. When overlaying comparison traces, it can be hard to fit them precisely. A manual nudge function may be used to shift a curve by as little as one pixel. Mitchell Software was the first to present a technique of aligning traces mathematically, using road roughness repeatability when plotting in distance-base.
Multi-user screen tailoring. Teams with more than one data analyst may find it useful to have software that responds with screen arrangements based on the user identification.
Pit strategy. In series which allow onboard computers for real-time race data collection, this may be used to help keep track of fuel and tire consumption, as a driver readout of fuel remaining, or with telemetry to the pit crew.
In-car display (dash). Compatibility between the DAS and dash, may avoid duplication of many functions, from the black box to sensors. It also provides more possibilities for feedback to the driver. Optional displays may be selected for testing, racing, yellow flag conditions, or pitting. Programmable warning condition lights and shift lights may also be available.
Threshold triggers. These can be pre-programmed or driver adjustable, to intelligently focus on critical conditions, instead of on simple single peak values -- such as noting an over-temperature limit, but only if speed is over a certain level. Or it can be used as a recording auto-start, removing that responsibility from the driver, or for higher-speed data sampling. For example, a crash recorder reacting to a high-g trigger can fill an entire data logger in a matter of seconds to record an impact, continuously over-writing old data until a real impact (rather than a false alarm) freezes the recording.
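The pre/post-trigger idea can be sketched with a rolling buffer; the class below is a hypothetical illustration of the logic, not any particular product's implementation:

```python
from collections import deque

class CrashRecorder:
    """Keep a rolling pre-trigger buffer (old samples silently overwritten);
    when the g threshold fires, freeze it and capture a few more samples."""
    def __init__(self, pre=4, post=3, threshold_g=8.0):
        self.buf = deque(maxlen=pre)   # rolling pre-trigger history
        self.post = post
        self.threshold = threshold_g
        self.capture = None            # frozen record after a trigger
        self.post_needed = 0
    def feed(self, g):
        if self.capture is not None and self.post_needed > 0:
            self.capture.append(g)     # still collecting post-impact samples
            self.post_needed -= 1
            return
        self.buf.append(g)             # normal running: overwrite oldest
        if abs(g) >= self.threshold:
            self.capture = list(self.buf)
            self.post_needed = self.post
    def reset(self):
        """False alarm: discard the capture and resume overwriting."""
        self.capture, self.post_needed = None, 0

rec = CrashRecorder()
for g in [1, 1, 2, 1, 9, 2, 1, 1]:     # a 9 g spike mid-stream
    rec.feed(g)
print(rec.capture)   # pre-trigger context, the impact, and 3 samples after
```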
CAN data bus and smart gauges. Multiplexing many data streams (both to and from the computer), on simpler few-wire harnesses, is becoming more popular. It requires processing power at remote locations, either at a few distant nodes, or at every individual sensor. It requires more sophisticated hardware and software, but may be less expensive in the long run. It allows "smart" sensors and gauges whose functions can be easily modified in the field.
Onboard data manipulation. The data logger can be programmed with its own math channels, to combine information of immediate value to the driver, such as the previously mentioned understeer calculation, or instantaneous speed improvement over a baseline lap.
Multiplexed slow data. A combination of CAN bus and selective sample rates can be especially useful for taking lots of temperature or pressure readings for aerodynamic development, and cramming them all on one channel. However, this may require more careful coding or identification of each signal.
Merging of TV images. A picture may be worth a thousand data points. Small, lightweight, inexpensive digital onboard cameras may be used to incorporate in-car video of the driver's actions, suspension deflections, or even impacts, in perfect sync with the data.
Control capabilities. This is the feature that actually preceded digital data acquisition: engine companies were using computers to control spark and fuel, which required electronic sensors and onboard memory. Control requires special hardware that doesn't just record, but can also output signals in real time. Currently, just a few higher level DAS companies provide this feature, because it is primarily useful to engine developers. It can also allow trackside manipulation of 3-D fuel and spark maps.
Computerized chassis control. The engine control capabilities are easily adapted to control anything else on the chassis that can be driven by a servomotor: ride height, anti-lock brakes, anti-wheelspin, differential bias, stability control, aerodynamic devices, etc. Unfortunately, most race sanctioning bodies have become extremely restrictive in this sort of technology, for various justifications.
Onboard in-wheel telemetry. Tire pressure monitors are already commercially available, which transmit four signals to the vehicle computer. So we will probably soon see multiple tire temperatures transmitted in real time as well. And one tire company is developing tires with magnetic materials embedded in the sidewall in such a way that their stress-induced deflection can be monitored as a measure of load and traction.
Wearable computers with head-mounted monocle display. These are not for drivers, but for the pit crew. The military already has a voice-controlled, wireless, net-connected, wearable computer for aircraft and vehicle maintenance. And I have already tried out a fingernail-size eyeglasses-mounted screen display and free-space mouse that could make trackside computing hands-free.
Data mining. One of the greatest data acquisition problems is the mass of data that defies fast analysis. Eventually, expert crew chiefs will be studied in great detail, and their mental processes programmed into software that duplicates their analyses and decision-making. This should make expertise more widely available, leveling the playing field somewhat.
Fiber optics. Transmitting data in a race car can be much cleaner with optics, which aren't affected by radiated electrical noises as wiring might be. The main disadvantage is greater difficulty in making good connections. So it may be limited to major runs or pre-assembled lengths.
APPLICATION

Making a DAS that can survive in the race car environment of heat, vibrations, and tremendous electrical noise (electromagnetic interference -- EMI, or radio frequency interference -- RFI) can be a real problem. EMI is a concern in ordinary passenger cars, given the output of distributors and the sensitivity of computer-controlled ignitions. But far worse is the case of race cars with no steel body enclosures. Most exposed wires, whether for power supply or transducer signals, act just like antennas, and can easily give you an engine rpm signal on every channel -- whether you want it or not. And if the vehicle battery is used for the logger or transducer power supply, there may be all sorts of sudden surprises on the line. The best approach is to expect the worst. Make sure that every module and transducer is totally enclosed by metal shielding, all connecting cables are externally shielded, and all shields are grounded to each other -- and not the chassis. Components should probably use their own isolated battery power.
Heat and vibration might not be a problem if the logger can be located at the engineers' preference. But all too often space and weight and physical interference problems (and driver prejudice) prevent such freedom. Electronic components are fairly resistant to shock and vibration, but when you mount them on a circuit board (which can vibrate like a diaphragm on its standoffs), connect loose wires, and hard-mount it all to an engine shaking at 15,000 rpm, failure is guaranteed. And with all the necessary electrical shielding, internal heat buildup alone might be worse than external sources.
When it comes to actually taking data on the car, key DAS issues are calibration, confidence, and tracing (or trouble-shooting a lost signal). Instrumentation engineers using interchangeable general-purpose equipment traditionally rely on electronic calibration. For example, a calibration curve on an accelerometer may show it to have 5,000 ohms resistance at 1.0 g. So they build one circuit to convert that to 1.0 volt per g, another to convert the voltage to binary-coded decimal (BCD), and then calibrate the output to plot 1.0 g as exactly 2.00 inches.
A more straightforward calibration method might be called "mechanical through-the-system". You stand the accelerometer up on one end, using the earth's gravity to provide a 1.0 g input, and make a short, steady recording. Then, when playing the data back, simply adjust the gain until it shows 2.00 inches on the screen. Some other calibration reference inputs are: photo optic sensor - 60 Hz "blink" from incandescent lights; suspension deflection - tape measure; steer angle or throttle deflection - protractor; pressure sensors - column of water or mercury; angular rate - motor driven turntable or timed skidpad circle; engine or vehicle speed - hold a constant tach reading; distance - drive stop-to-stop over a measured mile or 1/4 mile.
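The playback gain adjustment amounts to a single scale factor; a minimal sketch, with hypothetical raw readings from the 1.0 g tip-over check:

```python
def gain_from_reference(recorded, known_value):
    """Through-the-system calibration: the scale factor that maps the raw
    recording of a known physical input (e.g. gravity's 1.0 g) to its
    true value. Apply this factor to every later sample on the channel."""
    avg_raw = sum(recorded) / len(recorded)
    return known_value / avg_raw

raw_1g = [0.48, 0.52, 0.50, 0.50]   # raw units recorded while standing on end
scale = gain_from_reference(raw_1g, 1.0)
print(scale)   # ~2.0: multiply every raw sample by this to read in g's
```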
The accuracy of the mechanical measure should be adequate for most race car work. But more important, it provides a great deal of confidence in the results, especially if the calibration signal is recorded just before and just after each test. If two inches of suspension deflection in the pits indicates two inches on the printout, then you can be fairly certain the 1.5 inches recorded at 100 mph is a real 1.5 inches on the track.
Calibration signals can also be a very handy reference mark in data analysis. If a distinct steady signal can be provided at the start of each run, it may be possible for a program to automatically locate the start of, say, run number 14, and begin its analysis there. It is even more distinctive to provide a bi-directional signal, such as plus and minus 1.0 g, or left and right 90 degrees on the steering wheel.
Since data recording components will inevitably fail at the track, you should frequently scan the output to make damn sure something is being recorded from every transducer, or you may go home and sit down to analyze what isn't there. Because it can be very difficult to trace a digital signal at the track, the safest bet is to use modular "black box" construction and have backup components that you can switch around to localize the problem.
One of the best features of computer analysis is its ability to "clean up" the data or smooth the curves. Most raw data will have interference or "hash" superimposed on it, whether because of vibrations causing the transducer to oscillate above and below the true reading, or an extra signal (say from ignition interference) added to it. In either case, various methods of averaging readings over longer intervals will smooth out the curves. Or if the generic shape of the curve is known (such as a coastdown curve) then a "least-squares" curve may be fitted to the data. There are a number of increasingly sophisticated filtering techniques, going so far as to determine how far each incoming data point is from a predicted curve based on previous data. For more detailed information on data taking and analysis, with lots of example graphs, you need to get Buddy Fey's book, "Data Power: Using Racecar Data Acquisition."
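The predicted-curve idea in the last paragraph can be sketched as a simple spike rejector that compares each incoming sample against a linear extrapolation of the two previous points (a deliberately minimal illustration of the technique, not a production filter):

```python
def despike(data, max_jump):
    """Predictive spike rejection: if a sample jumps more than max_jump
    from a linear extrapolation of the previous two points, replace it
    with the prediction instead of letting it distort the trace."""
    out = list(data[:2])
    for x in data[2:]:
        predicted = 2 * out[-1] - out[-2]   # straight-line extrapolation
        out.append(x if abs(x - predicted) <= max_jump else predicted)
    return out

trace = [10, 11, 12, 99, 14, 15]   # one ignition-interference spike at index 3
print(despike(trace, 5))           # [10, 11, 12, 13, 14, 15]
```

Unlike plain averaging, this leaves the good data untouched and only acts on points that are implausible given the recent trend.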