
So I've been struggling with this problem for some time now, with no luck leveraging the wisdom of the internet and the relevant SO posts on the subject.

I'm writing an Android app that uses the ubiquitous accelerometer, but I seem to get a lot of "noise" even while the device is at rest, and I can't seem to figure out how to deal with it, as my readings need to be relatively accurate. I thought perhaps my phone (HTC Incredible) was malfunctioning, but the sensor seems to work just fine with other games and apps I've played with.

I've tried using various "filters", but I can't seem to wrap my head around them. I understand that gravity must be dealt with in some way, and perhaps that's where I'm going wrong. Currently I've tried this, adapted from an SO answer that refers to an example from the iPhone SDK:

    // accel[] must persist between sensor events (e.g. a field); it
    // accumulates the slowly changing (low-pass) component of each axis.
    accel[0] = event.values[0] * kFilteringFactor + accel[0] * (1.0f - kFilteringFactor);
    accel[1] = event.values[1] * kFilteringFactor + accel[1] * (1.0f - kFilteringFactor);

    // Subtracting the low-pass component leaves the quickly changing
    // (high-pass) part of the signal.
    double x = event.values[0] - accel[0];
    double y = event.values[1] - accel[1];

The poster said to "play with" the kFilteringFactor value (kFilteringFactor = 0.1f in the example) until satisfied. Unfortunately I still seem to get a lot of noise, and all this appears to do is make the readings come in as tiny fractions, which doesn't help me much, and it seems to simply make the sensor less sensitive. The math centers of my brain have also atrophied from years of neglect, so I don't fully understand how this filter works.

Could someone explain in detail how to get useful readings from the accelerometer? A concise tutorial would be an incredible help, as I haven't found a really good one (at least not aimed at my level of knowledge). I get frustrated because I feel like all of this should be more apparent to me. Any help or direction would be greatly appreciated, and of course I can provide more samples from my code if needed.

I hope I'm not asking to be spoon-fed too much; I wouldn't be asking unless I'd already been trying to figure it out for a while. It also looks like there's some interest from other SO members.


3 Answers


部分答案:

Accuracy. If you're looking for high accuracy, the inexpensive accelerometers you find in handsets won't cut the mustard. For comparison, a three-axis sensor suitable for industrial or scientific use runs north of $1,500 for just the sensor; adding the hardware to power it and turn its readings into something a computer can use doubles the price. The sensor in a handset runs well below $5 in quantity.

Noise. Cheap sensors are inaccurate, and inaccuracy translates to noise. An inaccurate sensor that isn't moving won't always show zeros; it will show values on either side of zero within some range. About the best you can do is characterize the sensor while motionless to get some idea of how noisy it is, and use that to round your measurements to a less-precise scale based on expected error. (In other words, if it's within ±x m/s^2 of zero, it's safe to say the sensor's not moving, but you can't be precisely sure, because it could be moving very slowly.) You'll have to do this on every device, because they don't all use the same accelerometer and they all behave differently. I guess that's one advantage the iPhone has: the hardware's pretty much homogeneous.
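As a rough sketch of that characterization step (everything here is hypothetical scaffolding, not an Android API): collect readings while the device is known to be still, estimate the noise band, and then treat anything inside that band as zero.

    // Hypothetical helper: estimate the noise floor of a motionless sensor,
    // then use it as a dead zone around zero.
    public class NoiseCalibrator {
        private final float[] samples;
        private int count = 0;

        public NoiseCalibrator(int sampleCount) {
            samples = new float[sampleCount];
        }

        // Feed values captured while the device is at rest; returns true when full.
        public boolean addSample(float value) {
            if (count < samples.length) samples[count++] = value;
            return count == samples.length;
        }

        // Rough noise band: the largest absolute deviation from the resting mean.
        public float noiseFloor() {
            float mean = 0f;
            for (int i = 0; i < count; i++) mean += samples[i];
            mean /= count;
            float maxDev = 0f;
            for (int i = 0; i < count; i++) {
                maxDev = Math.max(maxDev, Math.abs(samples[i] - mean));
            }
            return maxDev;
        }

        // Anything within the noise band of zero is "not moving", as far as we can tell.
        public static float denoise(float value, float noiseFloor) {
            return Math.abs(value) <= noiseFloor ? 0f : value;
        }
    }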

Gravity. There's some discussion in the SensorEvent documentation about factoring gravity out of what the accelerometer says. You'll notice it bears a lot of similarity to the code you posted, except that it's clearer about what it's doing. :-)
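For reference, the approach in those docs boils down to something like the sketch below: low-pass the raw values to isolate gravity, then subtract gravity to get linear acceleration. Here gravity and linearAcceleration are assumed to be float[3] fields that persist between events, and alpha is a tuning knob.

    // Paraphrased from the technique described in the SensorEvent docs.
    final float alpha = 0.8f; // closer to 1 = smoother, slower gravity estimate

    // Isolate the force of gravity with a low-pass filter.
    gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0];
    gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1];
    gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2];

    // Remove the gravity contribution with a high-pass filter.
    linearAcceleration[0] = event.values[0] - gravity[0];
    linearAcceleration[1] = event.values[1] - gravity[1];
    linearAcceleration[2] = event.values[2] - gravity[2];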

HTH.

Answered 2011-02-02T22:36:28.560

To get a useful reading from the accelerometer you can compute the magnitude of the acceleration vector: magnitude = sqrt(x*x + y*y + z*z). When the phone is at rest, this magnitude is that of gravity, about 9.8 m/s^2. So if you subtract that (SensorManager.GRAVITY_EARTH), then when the phone is at rest you will have a reading of 0 m/s^2. As for noise, Blrfl might be right about cheap accelerometers; even when my phone is at rest, the value continuously flickers by a few tenths of a metre per second squared. You could just set a small threshold, e.g. 0.4 m/s^2, and if the magnitude doesn't go over that, treat the phone as at rest.
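As a minimal sketch of that idea (the listener wiring and the 0.4 threshold are assumptions to tune per device):

    // import android.hardware.SensorEvent;
    // import android.hardware.SensorManager;
    // Inside a SensorEventListener registered for Sensor.TYPE_ACCELEROMETER.
    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];

        // Magnitude of the raw acceleration vector; gravity is included.
        double magnitude = Math.sqrt(x * x + y * y + z * z);

        // Remove the constant ~9.81 m/s^2 contribution of gravity.
        double linear = magnitude - SensorManager.GRAVITY_EARTH;

        // Hypothetical threshold: below this, treat the device as at rest.
        final double REST_THRESHOLD = 0.4; // m/s^2
        boolean atRest = Math.abs(linear) < REST_THRESHOLD;
    }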

Answered 2011-02-03T15:24:56.953

How do you deal with jitteriness? You smooth the data. Instead of treating the raw sequence of sensor values as your values, you average them on an ongoing basis, and the new sequence formed becomes the values you use. This moves each jittery value closer to the moving average. Averaging necessarily gets rid of quick variations in adjacent values, which is why people use the term low (frequency) pass filtering: data that originally may have varied a lot per sample (or per unit time) now varies more slowly.

E.g., instead of using the values 10, 6, 7, 11, 7, 10, you can average them in many ways. For example, we can compute the next value as an equal-weight mix of the running average (i.e., your last processed data point) and the next raw data point. Using a 50-50 mix on the numbers above, we'd get 10, 8, 7.5, 9.25, 8.125, 9.0625. This new sequence, our processed data, would be used in lieu of the noisy data. And of course we could use a different mix than 50-50.
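A minimal sketch of that running average (an exponential moving average; feeding 10, 6, 7, 11, 7, 10 through next() with mix = 0.5f reproduces the sequence above):

    // Exponential moving average: newAvg = mix * raw + (1 - mix) * oldAvg.
    public class RunningAverage {
        private final float mix;
        private float average;
        private boolean seeded = false;

        public RunningAverage(float mix) {
            this.mix = mix;
        }

        public float next(float raw) {
            if (!seeded) {
                average = raw; // the first sample seeds the average
                seeded = true;
            } else {
                average = mix * raw + (1 - mix) * average;
            }
            return average;
        }
    }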

As an analogy, imagine you are reporting where a certain person is located using only your eyesight. You have a good view of the wider landscape, but the person is engulfed in a fog. You will see pieces of the body that catch your attention (a moving left hand, a right foot, shine off eyeglasses, etc.) that are jittery, BUT each value is fairly close to the true center of mass. If we run some sort of running average, we get values that approach the center of mass of the target as it moves through the fog, and that are in effect more accurate than the values we (the sensor) reported, which were made noisy by the fog.

Now it seems like we are losing potentially interesting data to get a boring curve, but it makes sense. If we are trying to recreate an accurate picture of the person in the fog, the first task is to get a good smooth approximation of the center of mass. To this we can then add data from a complementary sensor/measuring process. For example, a different person might be up close to this target. That person might provide a very accurate description of the body movements, but might be in the thick of the fog and not know overall where the target is ending up. This is the complementary position to what we first got: the second data set gives detail accurately, without a sense of the approximate location. The two pieces of data would be stitched together. We'd low-pass the first set (like the problem presented here) to get a general location devoid of noise. We'd high-pass the second set to get the detail without unwanted, misleading contributions to the general position. We use high-quality global data and high-quality local data, each set optimized in complementary ways and kept from corrupting the other (through the two filterings).

Specifically, we'd mix in gyroscope data -- data that is accurate in the local detail of the "trees" but gets lost in the forest (drifts) -- into the data discussed here (from accelerometer) which sees the forest well but not the trees.

To summarize, we low-pass data from sensors that is jittery but stays close to the "center of mass". We combine this smooth base value with data that is accurate in its detail but drifts, so this second set is high-pass filtered. We get the best of both worlds as we process each group of data to clean it of its incorrect aspects. For the accelerometer, we smooth/low-pass the data effectively by running some variation of a running average on its measured values. If we were treating the gyroscope data, we'd do math that effectively keeps the detail (accepts deltas) while rejecting the accumulated error that would eventually grow and corrupt the accelerometer's smooth curve. How? Essentially, we use the actual gyro values (not averages), but use only a small number of samples (of deltas) apiece when deriving our final clean values. Using a small number of deltas keeps the overall curve mostly along the same averages tracked by the low-pass stage (by the averaged accelerometer data), which forms the bulk of each final data point.
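As a rough one-axis sketch of that complementary idea (a generic formulation, not code from any particular library; the 0.98/0.02 split and the dt handling are assumptions to tune):

    // Complementary filter: the gyro supplies short-term detail (integrated
    // deltas, effectively high-passed), while the accelerometer-derived angle
    // anchors the long-term average (effectively low-passed) against drift.
    public class ComplementaryFilter {
        private final float gyroWeight; // e.g. 0.98f: mostly gyro detail
        private float angle = 0f;       // filtered orientation estimate, radians

        public ComplementaryFilter(float gyroWeight) {
            this.gyroWeight = gyroWeight;
        }

        // gyroRate: angular velocity in rad/s; accelAngle: the angle implied by
        // the (smoothed) accelerometer; dt: seconds since the last sample.
        public float update(float gyroRate, float accelAngle, float dt) {
            float gyroAngle = angle + gyroRate * dt; // accept the gyro's delta
            angle = gyroWeight * gyroAngle + (1 - gyroWeight) * accelAngle;
            return angle;
        }
    }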

Answered 2015-12-06T06:23:18.590