An rpi based DIY Vibration meter - AVS Forum | Home Theater Discussions And Reviews
post #1 of 153 Old 12-16-2016, 12:43 PM - Thread Starter
AVS Forum Special Member
 
3ll3d00d's Avatar
 
Join Date: Sep 2007
Location: London, UK
Posts: 2,860
Mentioned: 98 Post(s)
Tagged: 0 Thread(s)
Quoted: 1636 Post(s)
Liked: 601
An rpi based DIY Vibration meter

Current Release: 0.3.1 - docs at http://vibe.readthedocs.io/en/latest/

windows exe available via https://github.com/3ll3d00d/vibe/releases/tag/0.3.1

demo video


Features
- collecting data from 1 or more connected sensors concurrently
- user defined target curves (via text based format or from uploaded wav files)
- charts available for each measurement
  - time series data (vibration, tilt, raw data)
  - frequency response data (spectrum, peak spectrum and PSD)
- measurements can be analysed by axis of vibration or using a summed response (calculated using a root sum of squares method with more weight placed on x and y axis vibration)
- charts available for target curves
  - frequency response data (spectrum, peak spectrum and PSD)
- allows user to compare 1-n data sets in a single graph, including target curves
- allows user to normalise curves against a chosen reference series
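The summed response could be sketched with numpy roughly as follows; the 1.4 weighting on the x and y axes is an illustrative assumption (in the spirit of ISO 2631-1 style horizontal scaling factors), not necessarily the exact weights the tool uses:

```python
import numpy as np

def summed_response(x, y, z, kx=1.4, ky=1.4, kz=1.0):
    """Root sum of squares across the three axis spectra.

    kx/ky/kz place more weight on x and y axis vibration; the 1.4
    figure here is an assumption for illustration only."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    return np.sqrt((kx * x) ** 2 + (ky * y) ** 2 + (kz * z) ** 2)

# a lone x-axis reading comes out scaled by kx; a z-only reading is unscaled
print(summed_response([1.0], [0.0], [0.0]), summed_response([0.0], [0.0], [1.0]))
```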

TODO
- RTA mode
- coordinated playback and measurement (i.e. send signal to media player to start playing and schedule a measurement alongside it)
- edit measurement data (slice time periods, edit metadata)


Last edited by 3ll3d00d; 04-13-2017 at 07:50 AM.
3ll3d00d is online now  
post #2 of 153 Old 12-16-2016, 03:10 PM
derrickdj1
You have to post a lot of pics along the way. This may be a first on this thread!
derrickdj1 is offline  
post #3 of 153 Old 12-16-2016, 04:45 PM
dominguez1
dominguez1 is offline  
 
post #4 of 153 Old 12-16-2016, 06:02 PM
FriscoDTM
This is a great project - I almost picked up one of the arduino accelerometer boards to play with but figured it would sit on the shelf and was beyond my skill level. It will be very cool if you can use it to time align subs and transducers to maximize constructive interference.

FriscoDTM is offline  
post #5 of 153 Old 12-16-2016, 07:06 PM
BassThatHz
Sounds like a neat project.
The thought has crossed my mind before. But then I got lazy and said meh to myself.

Quote:
Originally Posted by 3ll3d00d View Post
Software
- python on data analysis duties (numpy & scipy seem to do everything that is really necessary)
- python/c to collect the data via the i2c bus (a few people have posted code on github that provides working copies of this)
- python on webapp/microservice duties (flask?)
- reactjs for the front end
I'm no python expert but flask and reactjs sound bloated.

Research this way instead, it should run faster and with fewer headaches.

import asyncio
import websockets
http://websockets.readthedocs.io/en/stable/intro.html

Code:
#!/usr/bin/env python
# Minimal websocket server that pushes the current time to every client
# (this is the websockets library's getting-started example).

import asyncio
import datetime
import random
import websockets

async def time(websocket, path):
    # push a timestamped message, then sleep a random interval
    while True:
        now = datetime.datetime.utcnow().isoformat() + 'Z'
        await websocket.send(now)
        await asyncio.sleep(random.random() * 3)

start_server = websockets.serve(time, '127.0.0.1', 5678)

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
Code:
<!DOCTYPE html>
<html>
    <head>
        <title>WebSocket demo</title>
    </head>
    <body>
        <script>
            var ws = new WebSocket("ws://127.0.0.1:5678/"),
                messages = document.createElement('ul');
            ws.onmessage = function (event) {
                var messages = document.getElementsByTagName('ul')[0],
                    message = document.createElement('li'),
                    content = document.createTextNode(event.data);
                message.appendChild(content);
                messages.appendChild(message);
            };
            document.body.appendChild(messages);
        </script>
    </body>
</html>
I would code two endpoints:
1) a plain HTTP handler to serve the above HTML to the browser
2) and the above websocket to do the real-time async JavaScript data updates.

Not a whole lot of code is it? It is pretty much already coded for you! Just replace the UTC time code sender with the actual content you want to display.

I'm not sure what libs Python has, but surely there must be either a chart-to-JPEG rasterizer or maybe HTML5 vector graphics if you want to get real fancy.
(Unless flask and reactjs do that already... in which case: as you were...)
BassThatHz is offline  
post #6 of 153 Old 12-16-2016, 07:46 PM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
Software
- python on data analysis duties (numpy & scipy seem to do everything that is really necessary)
- python/c to collect the data via the i2c bus (a few people have posted code on github that provides working copies of this)
- python on webapp/microservice duties (flask?)
- reactjs for the front end
There is only one gotcha that might not be obvious to you.

Not sure how much web server development you have done, but request handlers effectively all run concurrently by default, no matter what framework/API/lib you use.

So that async function pointer for example, will be running in a multi-threaded context.
You'll have to ensure your IPC's are thread-safe or apply a locking strategy.

I'm more of a C# person myself, and in that language you could make the classes/functions/variables static and then read the static data variable/array (a dirty read-only operation).
Both of which are thread-safe and require no locking.
(In the Windows world you have to do another step: set IIS to never kill the .NET worker process.)
Whatever the Python equivalent of that is...

I know C++ and C# have a static keyword. Not sure about Python or C though...

If you can do static, then you don't need IPC's as there is only one instance to deal with. Which may make your life easier (and the code: faster).

That said, if you make the app state-less, you might be able to just "deal with" the overhead of instantiating a new object reference for each async web request. That's kind of a lazy-man's way of achieving the above though.

At the end of the day it's up to you which direction you want to go. I just gave you 3 possible options.
If you care about performance I'd personally do static if you can, or IPC with locking if not, and state-less as a last resort.

Last edited by BassThatHz; 12-16-2016 at 08:05 PM.
BassThatHz is offline  
post #7 of 153 Old 12-16-2016, 09:54 PM
BassThatHz
Quote:
Originally Posted by FriscoDTM View Post
to time align subs and transducers to maximize constructive interference.
The speed of sound through wood is much higher than through air.
Like 10,000ft per second vs ~1,100ft per second.
So the airborne bass arrives roughly 0.9ms later for every foot of air distance to your sub, while the wood path only adds ~0.1ms per foot; you'd therefore add roughly 0.8ms of delay to the transducer per foot of air path. (At least, in theory...)
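As a sanity check on the numbers, a quick back-of-envelope script; the speeds are rough ballpark figures, not measured values:

```python
# Rough ballpark speeds, not measured values
SPEED_AIR_FT_S = 1125.0     # sound in air at room temperature
SPEED_WOOD_FT_S = 10000.0   # varies a lot by species and grain direction

def transducer_delay_ms(air_path_ft, wood_path_ft):
    """Delay to add to the tactile transducer so it lines up with the
    airborne bass: air travel time minus structural travel time."""
    t_air = air_path_ft / SPEED_AIR_FT_S
    t_wood = wood_path_ft / SPEED_WOOD_FT_S
    return (t_air - t_wood) * 1000.0

# e.g. 10ft of air to the sub and 10ft of floor/riser to the transducer
print(round(transducer_delay_ms(10, 10), 2))  # a bit under 8ms
```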
BassThatHz is offline  
post #8 of 153 Old 12-17-2016, 02:51 AM - Thread Starter
3ll3d00d
the UI is going to be pretty simple given that, if I were writing this for purely personal use, the UI would be a bunch of scripts. It will basically be a couple of forms that let you select a dataset to work with, choose what sort of graphs you want to see, perhaps override some of the analysis parameters, and then show you some graphs. There's no realtime interaction with the measurements like some sort of rpi speclab in a browser; it's purely offline analysis with perhaps some visual indicator that a measurement is in progress. As such there are no concurrency concerns on that front. We'll see how fast the pi is at reading & writing the data out though. The MPU-6050 has a FIFO buffer which means you only need to read a chunk of data about every 600ms or so; I can imagine needing to hand the data off to another thread to write to disk but we'll see.
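The read cadence follows directly from the FIFO size; a quick sketch, assuming the MPU-6050's 1024-byte FIFO, 6 bytes per accel-only sample, and an illustrative 250Hz rate:

```python
FIFO_BYTES = 1024        # MPU-6050 FIFO size
ACCEL_ONLY_BYTES = 6     # x/y/z as 16-bit values with only the accel enabled

def fifo_fill_time_s(sample_rate_hz, bytes_per_sample=ACCEL_ONLY_BYTES):
    """Seconds until the FIFO overflows, i.e. the deadline for each read."""
    return FIFO_BYTES / (bytes_per_sample * sample_rate_hz)

# at 250Hz accel-only you must drain the FIFO at least every ~0.68s
print(round(fifo_fill_time_s(250), 3))
```

Enabling the gyro doubles the bytes per sample and halves the deadline accordingly.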

flask looks fairly lightweight as far as I can see; never used it, but the assorted examples look pretty simple. There is an even simpler alternative called bottle, mind you. I'll just see how it goes. react is straightforward as well and makes writing a UI quick. Ultimately I just want to knock out a functional UI as quickly as possible.

The main issue I see at the moment is cable length from the GPIO to the accelerometer; an i2c signal seems very sensitive to cable length if you want high sample rates, so cables are typically ~20cm at most, which is not very far at all & might make isolating the accelerometer from the rpi tricky. This would be a good argument for getting an accelerometer with an analogue output and then feeding that back in through an ADC. It's probably a good thing I have an accelerometer already as I will be able to compare results from the two sources and then make a judgement on how to proceed.

I'm not sure if SPI can support longer cables, if so then the MPU-6000 is an option as that is basically the same device with an SPI interface. There is also a newer MPU-9250 but they don't generally seem readily available on a breakout board. I did find https://drotek.com/shop/en/home/264-...out-board.html though. I'll probably go with the i2c version to begin with and then revisit if the cable length is a real issue.

Last edited by 3ll3d00d; 12-17-2016 at 09:54 AM.
3ll3d00d is online now  
post #9 of 153 Old 12-17-2016, 07:53 AM
coolrda
This could be the next REW. Nice job 3. Looking forward.
coolrda is online now  
post #10 of 153 Old 12-17-2016, 09:55 AM - Thread Starter
3ll3d00d
added a suggested feature list to the 1st post, can't promise how long it will take me to get round to writing all this mind you
3ll3d00d is online now  
post #11 of 153 Old 12-17-2016, 11:48 AM
BassThatHz
One feature you'll want is the ability to subtract gravity from the x, y or z axis.

Kinda bummed that it won't be real-time analysis though. Me sad.

Not sure how fast or how much RAM the pi has, but worst-case you could offload it to a network attached PC.
I doubt you'd need to go that far though.

It sounds like you might have done signal analysis coding before? Because if not, you could be in for a long ride.
BassThatHz is offline  
post #12 of 153 Old 12-17-2016, 01:34 PM - Thread Starter
3ll3d00d
Quote:
Originally Posted by BassThatHz View Post
Kinda bummed that it won't be real-time analysis though. Me sad.
something to add at a later date perhaps, though tbh most of this data is just short clips which you want to characterise as a whole so offline analysis is sufficient as far as I can see.

Quote:
Originally Posted by BassThatHz View Post
It sounds like you might have done signal analysis coding before? Because if not, you could be in for a long ride.
I've got all the basic analysis worked out in some simple scripts, it's pretty much all done by scipy tbh so mostly it's just a case of manipulating the results. I need to get some more data and compare it to (e.g.) speclab to be sure the results are correct but certainly seems to be so far.
3ll3d00d is online now  
post #13 of 153 Old 12-17-2016, 04:53 PM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
it's purely offline analysis with perhaps some visual indicator that a measurement is in progress. As such there are no concurrency concerns on that front.
How do you plan on notifying the browser that the offline analysis has finished?
BassThatHz is offline  
post #14 of 153 Old 12-17-2016, 05:27 PM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
- stop an ongoing measurement
The HTTP responder will be running under a different thread, possibly even a different process than your acceleration analyzer (depending on how you code and host your python modules, that is...)

So how do you plan on "stopping an ongoing measurement" without running into a concurrency issue? Separate processes don't share memory by default (and are running in parallel), which will force you to implement a thread-safe IPC.

Not the end of the world, but you'd have to abandon this feature if you don't plan on implementing something to resolve the above inter-process inter-thread issue.

A static singleton getter for the acceleration analyzer module would be a common approach instead of using IPC's. That way everything can be contained in a single host process instead of multiple.

On second thought: I suppose you could do a pkill on it. (That would certainly stop the measurement. )
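For completeness, the simplest thread-safe way to implement a stop command in Python is a shared `threading.Event`, which needs no explicit lock; a minimal sketch (the loop body is a stand-in, not the actual measurement code):

```python
import threading
import time

stop_requested = threading.Event()  # thread-safe flag; no explicit lock needed

def measurement_loop(chunks):
    """Stand-in measurement loop: checks the stop flag between 'reads'."""
    done = 0
    for _ in range(chunks):
        if stop_requested.is_set():
            break          # a STOP command arrived from the HTTP handler thread
        done += 1          # stand-in for reading one chunk from the sensor
        time.sleep(0.001)
    return done

print(measurement_loop(3))   # no stop requested -> processes all 3 chunks
stop_requested.set()         # what the HTTP handler would do on STOP
print(measurement_loop(3))   # stops before the first chunk
```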

Last edited by BassThatHz; 12-17-2016 at 07:31 PM.
BassThatHz is offline  
post #15 of 153 Old 12-18-2016, 03:32 AM - Thread Starter
3ll3d00d
Quote:
Originally Posted by BassThatHz View Post
The HTTP responder will be running under a different thread, possibly even a different process than your acceleration analyzer (depending on how you code and host your python modules, that is...)

So how do you plan on "stopping an ongoing measurement" without running into a concurrency issue? Separate processes don't share memory by default (and are running in parallel), which will force you to implement a thread-safe IPC.
sorry, I wasn't especially clear in my earlier post. When I said "no concurrency issues" here, I meant that the system is pretty much a bunch of synchronous operations so there isn't much to think about on that front. This doesn't mean there aren't multiple things happening in parallel though, the main one being the UI as you suggest.

I was going to handle this by having some sort of MeasurementDevice component which has an event driven api and an associated simple state machine to model what is going on in there. This means the UI can submit START or STOP commands and get the current state of the component. This component will run in a separate process (as I understand it, python has a global lock problem) and may end up needing to fork itself into two processes (one for reading, one for writing). Since there is only one measurement device & all it needs to do is signal when some output is available on disk, the standard Pipe (https://docs.python.org/3/library/multiprocessing.html) looks like it will be sufficient. Moving to some "realtime" visualisation would just mean switching the reader->writer event from a queue to a pub-sub so that an analysis component can see segments of data as they arrive in parallel with the writer. It would be a basic stream processing situation at that point.
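The Pipe-based handoff described above might look something like this minimal sketch (the message names and chunk counts are made up for illustration):

```python
import multiprocessing as mp

def recorder(conn, chunks):
    """Child process: pretends to capture data, signalling the parent
    each time a chunk has been written to disk."""
    for i in range(chunks):
        # ... real code would read the sensor and write a file here ...
        conn.send(('DATA_READY', i))   # message names are made up
    conn.send(('DONE', chunks))
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=recorder, args=(child_conn, 3))
    proc.start()
    events = []
    while True:
        msg = parent_conn.recv()       # blocks until the child signals
        events.append(msg)
        if msg[0] == 'DONE':
            break
    proc.join()
    print(events)
```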

I haven't thought especially deeply about any of this though, I was assuming any modern language would give you the tools required to do this sort of thing without having to plan ahead too much....
3ll3d00d is online now  
post #16 of 153 Old 12-18-2016, 06:05 AM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
sorry, I wasn't especially clear in my earlier post. When I said "no concurrency issues" here, I meant that the system is pretty much a bunch of synchronous operations so there isn't much to think about on that front. This doesn't mean there aren't multiple things happening in parallel though, the main one being the UI as you suggest.

I was going to handle this by having some sort of MeasurementDevice component which has an event driven api and an associated simple state machine to model what is going on in there. This means the UI can submit START or STOP commands and get the current state of the component. This component will run in a separate process (as I understand it, python has a global lock problem) and may end up needing to fork itself into two processes (one for reading, one for writing). Since there is only one measurement device & all it needs to do is signal when some output is available on disk, the standard Pipe (https://docs.python.org/3/library/multiprocessing.html) looks like it will be sufficient. Moving to some "realtime" visualisation would just mean switching the reader->writer event from a queue to a pub-sub so that an analysis component can see segments of data as they arrive in parallel with the writer. It would be a basic stream processing situation at that point.

I haven't thought especially deeply about any of this though, I was assuming any modern language would give you the tools required to do this sort of thing without having to plan ahead too much....
pub-sub particularly lends itself well to web-servicized back-ends. As it allows multi-machine distributed nodes. Probably overkill for a single Pi box though.
That's something that you'd typically see in a b2b integration, SOA, or a cloud-attached super-computer architecture.

NASDAQ for example uses one-way pub-sub pushes for its global architecture, if I recall correctly. But we are talking millions of feeds, desktops, hedge fund servers and traders. A much larger scale than a single Pi box.

An asynchronous-event model would be a faster way, if you have a (global) controller using that process-lib you referenced and with no webservice backend.

You can do pub-sub without a webservice of course, the only other time pub-sub is handy in that case is when you are trying to loosely couple many active listener classes. Not sure how useful that model would be in a one-off Pi project with a single user (i.e. only 1 subscriber).

You did mention that you were leaning towards web services though.
I know that in c# you can have both async and sync web-services too, not just events/function pointers.

Not sure how python handles events, but in c# they are basically function pointers under the cover.
Events in c# don't ensure thread-safety by default, so when interacting with the UI across threads, you still need to write code like this:
Code:
private void SetText(string text)
{
	if (this.textBox1.InvokeRequired)
	{	
		SetTextCallback d = new SetTextCallback(SetText);
		this.Invoke(d, new object[] { text });
	}
	else
	{
		this.textBox1.Text = text;
	}
}
This is c# windows-forms specific, but asp.net web-apps have a similar issue.
How python handles this I'm not sure, but I'd imagine it would have a similar issue and resolution in a pythony way.
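One common "pythony" resolution is the same pattern in spirit: background threads never touch the UI directly, they post messages to a thread-safe `queue.Queue` which the UI/main thread drains on a timer tick. A minimal sketch (names are illustrative):

```python
import queue
import threading

updates = queue.Queue()  # thread-safe handoff, the moral equivalent of Invoke

def worker():
    # background thread: never touches the UI, just posts messages
    for i in range(3):
        updates.put(f'reading {i}')

def drain():
    """Called from the UI/main thread (e.g. on a timer tick):
    pull and apply any pending updates."""
    applied = []
    while True:
        try:
            applied.append(updates.get_nowait())
        except queue.Empty:
            return applied

t = threading.Thread(target=worker)
t.start()
t.join()          # joined here only to make the demo deterministic
result = drain()
print(result)     # -> ['reading 0', 'reading 1', 'reading 2']
```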

In c# you can have synchronous and asynchronous function pointers / event handlers.
async is thread-safe, and sync isn't (at least in c# land it is that way, I'd imagine python is perhaps similar in this regard... but maybe not.)

I'd imagine that the process-lib you referenced is doing some sort of IPC's under the cover, so needing to worry about thread-safety may not be necessary as it is already handling that for you; allowing you to get away with using natively non-thread-safe sync-events in a multi-process multi-thread environment.

I hope that your planned model works for you without too much headache.

In any case, these are some things to think about and prototype before diving in head first or committing deeply to any particular model that may or may not behave as first planned.

I've seen a lot of programmers bang their heads against a wall when first diving into the world of web-apps and web-services from traditional command-line/forms backgrounds, or back-end vs UI. (i.e. default single-threaded vs multi-threaded environments, and real-time vs disconnected request-reply models.)

Last edited by BassThatHz; 12-18-2016 at 06:15 AM.
BassThatHz is offline  
post #17 of 153 Old 12-18-2016, 06:38 AM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
sorry, I wasn't especially clear in my earlier post. When I said "no concurrency issues" here, I meant that the system is pretty much a bunch of synchronous operations so there isn't much to think about on that front. This doesn't mean there aren't multiple things happening in parallel though, the main one being the UI as you suggest.
Any SSH terminal will have its own process and threads, both the desktop client and the Pi-hosted ssh server.
The same goes for any browser like Chrome, each tab launching yet another thread.

Then you have your data-analyzer process/thread and measurement-device process/thread, and any given backend web-service process/thread and UI web-server process/thread, all running on the Pi (depending on how it is all coded of course...)

So there is potentially many different processes and threads all interacting with each other here. (and all taking up ram and CPU usage too I might add.)

From a pure efficiency/speed perspective I would do: one web process that manages the file-writer thread, the analyzer thread and the measurement thread.
So 1 process and 3 threads. All of which would be a static singleton pattern. Possibly with a factory pattern for loose class coupling if you desire that.
FYI: I would just use that simple Python websockets class, rather than any bloated apache server processes or anything heavier-duty like that.
That would keep it lightweight and as simple as possible. (Likely, eliminating the need for reactjs/flask entirely...)

But that would only work well if Python behaves itself when managing internal threads...
but as you mentioned: perhaps it doesn't... thus forcing you to use the process-lib way instead, and thus 4 processes each with 1 thread. (?)

Last edited by BassThatHz; 12-18-2016 at 07:00 AM.
BassThatHz is offline  
post #18 of 153 Old 12-18-2016, 07:00 AM - Thread Starter
3ll3d00d
fwiw I've been writing those sort of (large scale high throughput and/or low latency) systems for years, just never had a need to use python before. This means that most of the time I put into this so far seems to be spent working out what the idiomatic python equivalent is. This invariably then sends me off on a tangent, probably should concentrate on getting it done

on the multiprocess thing, Python, or at least CPython, seems to enjoy something referred to as the GIL (https://wiki.python.org/moin/GlobalInterpreterLock) apparently because the memory management isn't thread safe. This means you have to use multiple processes if you want to actually allow things to execute in parallel.
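A minimal illustration of sidestepping the GIL with multiprocessing (the workload here is just a stand-in for real analysis):

```python
from multiprocessing import Pool

def busy_sum(n):
    # CPU-bound pure-Python work: threads would serialise on the GIL,
    # but separate processes genuinely run in parallel
    return sum(range(n))

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        results = pool.map(busy_sum, [10_000, 20_000])
    print(results)  # -> [49995000, 199990000]
```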
3ll3d00d is online now  
post #19 of 153 Old 12-18-2016, 07:19 AM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
fwiw I've been writing those sort of (large scale high throughput and/or low latency) systems for years, just never had a need to use python before. This means that most of the time I put into this so far seems to be spent working out what the idiomatic python equivalent is. This invariably then sends me off on a tangent, probably should concentrate on getting it done

on the multiprocess thing, Python, or at least CPython, seems to enjoy something referred to as the GIL (https://wiki.python.org/moin/GlobalInterpreterLock) apparently because the memory management isn't thread safe. This means you have to use multiple processes if you want to actually allow things to execute in parallel.
Well in that case... how much harder would it be to just code the whole thing in C++?
You'd have to port NumPy/I2C/websockets etc over to C syntax, but that shouldn't be "too hard", just time consuming...

Then you'd have complete control over the whole world. No silly GIL to deal with. You could then take advantage of pointers, rather than some Python memory manager.

Just a thought!
BassThatHz is offline  
post #20 of 153 Old 12-22-2016, 06:45 AM - Thread Starter
3ll3d00d
the hard bit for me is doing the signal processing so doing it in python makes sense as it basically does it all for you. Besides using a different language from the day job keeps it interesting
3ll3d00d is online now  
post #21 of 153 Old 12-22-2016, 06:45 AM - Thread Starter
3ll3d00d
I've been researching the signal processing side of things to check that the libs available can do the job we need without me having to actually write any serious amount of code, seems like the answer to that one is yes (which is nice).

It has got me thinking about the actual requirements here and I think we really have 2 different use cases.

The first use case is system calibration which uses periodic white noise to characterise the response of the system to a "flat" input. We then want to see the response of the system and compare it against a (pre) defined target curve. Ideally the system would then tell us "do this to map to the target curve" (c.f. the REW auto eq window).

To support this we need a linear spectrum graph, basically the vibration equivalent of a magnitude response graph (G vs frequency). We then want to plot the response on individual axes along with the summed response and the target curve. We also want a graph that shows the difference between those two (probably summed / target). Finally, a simple solution to the auto eq problem is to invert that difference graph; this is a starting point for the filter to apply to your TR devices.
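The difference-then-invert step could be sketched with scipy roughly as follows; the data, the welch parameters and the flat target curve are all placeholder assumptions:

```python
import numpy as np
from scipy import signal

fs = 500  # Hz; assumed accelerometer sample rate
rng = np.random.default_rng(0)
accel = rng.standard_normal(fs * 10)   # stand-in for a white-noise capture

# linear spectrum of the measured vibration (nperseg is a placeholder choice)
freqs, psd = signal.welch(accel, fs=fs, nperseg=1024)
measured_db = 10 * np.log10(psd)

# flat target at the measurement's mean level, standing in for a user curve
target_db = np.full_like(measured_db, measured_db.mean())

delta_db = measured_db - target_db   # the "summed / target" difference graph
eq_db = -delta_db                    # inverted -> starting point for EQ filters

print(freqs.shape, delta_db.shape)
```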

I don't see any need for any other graphs in this situation, a spectrogram is pointless as is any peak hold graph and PSD seems a waste of time too.

Secondly we have real world content. This is fundamentally different to the calibration signal as it is non-periodic and tends to have a fairly large dynamic range, at least in the passband we're interested in. This means that a linear spectrum is not so useful because it's basically an average of the entire track by frequency, whereas what we're really interested in is the peaks and how vibration is sustained over time. I think this means we really want to see a spectrogram, a peak-value equivalent of the linear spectrum and possibly the PSD (I'm not really sure whether that is useful in this context tbh but it's easy to provide). I also think we're only really interested in the summed vibration here, as opposed to individual axes, as the total effect is what matters.
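The peak-spectrum idea (a max per frequency bin over time, rather than a whole-clip average) falls straight out of scipy's spectrogram output; a sketch, with a synthetic burst standing in for real content:

```python
import numpy as np
from scipy import signal

fs = 500                      # Hz; assumed sample rate
t = np.arange(0, 5, 1 / fs)
# stand-in for real content: a 20Hz burst in the middle of the clip
clip = np.sin(2 * np.pi * 20 * t) * (2 < t) * (t < 3)

# spectrogram: how the vibration is sustained over time
f, times, Sxx = signal.spectrogram(clip, fs=fs, nperseg=256)

# "peak spectrum": max per frequency bin across time slices, as opposed
# to the whole-clip average a linear spectrum gives you
peak_per_freq = Sxx.max(axis=1)

peak_bin = f[np.argmax(peak_per_freq)]
print(round(float(peak_bin), 1))  # strongest bin lands near 20Hz
```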

The interesting thing here is that we could also run a wav of the source clip through the same analysis and then present the two side by side and perhaps also show the difference between the two signals. This would then tell you how far away from the reference you are with real world content. I imagine the difficulty in doing that will be aligning the start and end time of the two signals, no idea if it's feasible to do that automatically. It would be a nice feature though.

Thoughts?
3ll3d00d is online now  
post #22 of 153 Old 12-22-2016, 02:38 PM - Thread Starter
3ll3d00d
The basic analysis functions are implemented along with some simple wav or txt file parsing, details in https://github.com/3ll3d00d/vibe/blo...n/vibe/vibe.py

using the EoT scene as an example; red is peak, green is linear spectrum, blue is PSD



and a spectrogram using a slightly funky colour scheme



these are just example graphs generated using matplotlib in python btw.
Attached Thumbnails: example.png, spectro.png
3ll3d00d is online now  
post #23 of 153 Old 12-22-2016, 03:20 PM
notnyt
If you're using an rpi you can easily measure SPL at the same time as well with a cheap mic capsule.
notnyt is offline  
post #24 of 153 Old 12-22-2016, 06:31 PM
BassThatHz
Quote:
Originally Posted by 3ll3d00d View Post
I've been researching the signal processing side of things to check that the libs available can do the job we need without me having to actually write any serious amount of code, seems like the answer to that one is yes (which is nice).
Many people take this approach: prototype each module in isolation, and then mix them together later on.
That way the sum of the pieces is "almost" guaranteed to do what you want.

Quote:
Originally Posted by 3ll3d00d View Post
on individual axes along with the summed response
Just make sure you use 3d vector summation, rather than absolute value.

Otherwise your scaling will violate the laws of energy conservation and will be way off, which will give you nightmares in the target curve and EQ stages.
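For illustration, the vector summation being suggested here is the per-sample magnitude of the 3-axis vector, not a sum of absolute values. A minimal numpy sketch (function name is mine, not from the app):

```python
import numpy as np

def vector_magnitude(x, y, z):
    """Per-sample magnitude of the 3-axis acceleration vector,
    i.e. sqrt(x^2 + y^2 + z^2) rather than |x| + |y| + |z|."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    return np.sqrt(x * x + y * y + z * z)
```

e.g. for x=3, y=4, z=0 the vector magnitude is 5, where an absolute-value sum would overstate it as 7.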

Quote:
Originally Posted by 3ll3d00d View Post
I imagine the difficulty in doing that will be aligning the start and end time of the two signals, no idea if it's feasible to do that automatically. It would be a nice feature though.
Thoughts?
Not difficult: just play a chirp/burp and time-align the peaks. Then play the signal and perform signal analysis using those relative timing offsets.

You may want to use 3 different frequency burps and average the offsets (say 15 Hz, 30 Hz and 60 Hz). That way frequency-related anomalies will be reduced.

One feature you might want is to display this offset so that you can calculate and add DSP delay to the tactile transducer or sub for time alignment.
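A minimal sketch of the peak-alignment idea using cross-correlation; the function name and parameters are illustrative, not part of the app:

```python
import numpy as np

def delay_seconds(ref, meas, fs):
    """Estimate how far meas lags ref (in seconds) by locating the
    peak of their cross-correlation, as you might for a test chirp/burp
    recorded at two sensors."""
    corr = np.correlate(np.asarray(meas, dtype=float),
                        np.asarray(ref, dtype=float), mode='full')
    # index (len(ref) - 1) corresponds to zero lag in 'full' mode
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag / fs
```

Averaging this over a few burps at different frequencies, as suggested above, would smooth out frequency-dependent anomalies; the resulting offset is exactly the DSP delay you'd dial in.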
BassThatHz is offline  
post #25 of 153 Old 12-23-2016, 05:38 AM - Thread Starter
Quote:
Originally Posted by BassThatHz View Post
Just make sure you use 3d vector summation, rather than absolute value.

Otherwise your scaling will violate the laws of energy conservation, it will be way off, which will give you nightmares in the target curve and EQ stages.
It is not clear what the right approach to summing multi-axis vibration is, though it is clear that summing is important if you want to improve the correlation between subjective perception and objective measurements. The relevant standard appears to be BS 6841, which I believe evolved into ISO 2631. Generally speaking, these describe applying a frequency weighting (which varies by axis and measurement type) and a scaling factor (which varies by axis). Some reports say use both, others say just use the scaling factor. There is also disagreement on the correct scaling factors.

If I decide to go with frequency weighting then https://www.mathworks.com/matlabcent..._body_filter.m provides some MATLAB code to generate them; I imagine this can be adapted to Python.

One method that I've seen suggested is a root sum of squares, e.g. as per https://dspace.lboro.ac.uk/dspace-js...REPOSITORY.pdf (which also gives some suggested scaling factors), and another is VDV (vibration dose value), e.g. as per http://www.auburn.edu/~kam0003/347%20Binder1.pdf (which also gives a method for summing VDVs). There's also https://dspace.lboro.ac.uk/dspace-js...ndle/2134/6250, which is someone's PhD thesis on the subject.
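The root-sum-of-squares-with-scaling-factors method can be sketched as below. The 1.4/1.4/1.0 defaults are the ISO 2631-style multipliers often quoted for seated whole-body vibration, but given the disagreement in the literature noted above, treat them as placeholders rather than the values any of the linked papers settle on:

```python
import numpy as np

def weighted_rss(x, y, z, kx=1.4, ky=1.4, kz=1.0):
    """Root sum of squares across axes with per-axis scaling factors.
    kx/ky/kz default to the ISO 2631-style 1.4/1.4/1.0 weights for
    seated vibration; these are placeholder values."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    return np.sqrt((kx * x) ** 2 + (ky * y) ** 2 + (kz * z) ** 2)
```

With kx = ky = 1.4, purely horizontal vibration is weighted up by 40% relative to vertical, which is the "more weight placed on x and y axis vibration" mentioned in the feature list.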

Quote:
Originally Posted by BassThatHz View Post
Not difficult: just play a chirp/burp and time-align the peaks. Then play the signal and perform signal analysis using those relative timing offsets.

You may want to use 3 different frequency burps and average the offsets (say 15 Hz, 30 Hz and 60 Hz). That way frequency-related anomalies will be reduced.

One feature you might want is to display this offset so that you can calculate and add DSP delay to the tactile transducer or sub for time alignment.
I'm not doing playback from this software, just measurement, so it's a problem of aligning some arbitrary measurements.
3ll3d00d is online now  
post #26 of 153 Old 12-23-2016, 01:23 PM - Thread Starter
I think that point about the scaling factors would make a diff between measured and target acceleration very hard to implement, as it implies "start here for your EQ adjustment" but it won't have that effect (even if the seat responds linearly).
3ll3d00d is online now  
post #27 of 153 Old 12-25-2016, 06:04 PM
The heavier and denser the object, and/or the more lossy it is, the less the vibs will scale (with any method).
The scaling would literally have to change from material to material and object to object.

Getting this to be automated will be impossible. It will probably only work for you, in your room, with your system/objects.

Not sure what you have for a floor, but mine is 24 inches of reinforced concrete. The vibs I get are typically air-induced only.
Someone with a 2x4 floor on the 2nd story of a rickety 200-year-old house will have far more vibs (wanted or unwanted).

Unfortunately your app won't translate at all from house to house, or even basement to upstairs.
The scaling would have to be manual and arbitrary. That's just the way highly-variant mechanical systems are. It is what it is.


I have no idea how Linux USB audio works, but I'd imagine someone has solved this Python/rpi audio problem already (and likely open-source).
It's just a matter of doing it. You are already going to all this work to DSP streams of data and wave files; adding audio I/O is really not much more work.
The chirp/burps could either be static wave files you pre-build or come from a sine generator.

All the cellphone vib apps have real-time monitoring.
Not having this app be real-time or interactive really dampens the sex appeal and usefulness IMO.
People are impatient these days, they like to see the results in the now. They want to boom their subs while looking at real-time charts updating.

Besides, being able to adjust the scaling in real-time to match targets would be EXTREMELY handy... otherwise the process will be slow and painful.
Run, look at the offline results. Nope, insufficient.
Re-run, look at the results. Nope, still insufficient.
Re-run, look at the results. Nope, still insufficient.
Re-run, look at the results. Bangs head against wall etc etc

Instead of just:
Run it; and watch it immediately meet or not meet expectations.

It just needs to do 1 or 4 updates per second @ 1 to 16k.
It's not like it has to do 120fps @ 512k...

Last edited by BassThatHz; 12-25-2016 at 06:08 PM.
BassThatHz is offline  
post #28 of 153 Old 12-26-2016, 02:52 AM - Thread Starter
Quote:
Originally Posted by BassThatHz View Post
The heavier and denser the object, and/or the more lossy it is, the less the vibs will scale (with any method).
The scaling would literally have to change from material to material and object to object.

Getting this to be automated will be impossible. It will probably only work for you, in your room, with your system/objects.
The scaling and summation are there to account for human perception of vibration; the material through which it travels is irrelevant. You may want to read the links I provided.

Quote:
Originally Posted by BassThatHz View Post
Not sure what you have for a floor, but mine is 24 inches of reinforced concrete. The vibs I get are typically air-induced only.
There is no known DIY way to measure pressure response directly; this is for vibration only.

Quote:
Originally Posted by BassThatHz View Post
Unfortunately your app won't translate at all from house to house, or even basement to upstairs.
I know. Why do you think this is relevant?

Quote:
Originally Posted by BassThatHz View Post
I have no idea how Linux USB audio works, but I'd imagine someone has solved this Python/rpi audio problem already (and likely open-source).
It's just a matter of doing it. You are already going to all this work to DSP streams of data and wave files; adding audio I/O is really not much more work.
The chirp/burps could either be static wave files you pre-build or come from a sine generator.
I know it can be done, but I have no need for it as I have no intention of using the rpi as the audio source.

Quote:
Originally Posted by BassThatHz View Post
All the cellphone vib apps have real-time monitoring.
Not having this app be real-time or interactive really dampens the sex appeal and usefulness IMO.
People are impatient these days, they like to see the results in the now. They want to boom their subs while looking at real-time charts updating.
This made me chuckle, as the expected audience for this app is somewhere close to 1. The number of people using VS regularly today on AVS is in the single digits, and that's pretty much completely trivial to use. As such, I'm not expecting a large audience for a solution that involves buying an rpi + an accelerometer on a breakout board, wiring it together and manually installing and configuring a python app.

Quote:
Originally Posted by BassThatHz View Post
Instead of just:
Run it; and watch it immediately meet or not meet expectations.

It just needs to do 1 or 4 updates per second @ 1 to 16k.
It's not like it has to do 120fps @ 512k...
Speaking as someone who has calibrated a nearfield setup using an accelerometer and an RTA view of its response, I can say that an RTA is nice to have but far from essential when calibrating. The process can involve multiple seating positions and multiple measurements (TR and FR), you may not have the ability to make real-time changes to your EQ anyway, and deciding what to do can be an offline process (i.e. it requires thinking time). An RTA also tends to make it hard to remember what you've changed and when you changed it, so repeatability is difficult. An RTA view is good for a quick sanity check and some simple twiddling (e.g. level setting). Finally, an RTA view is completely useless for comparing real-world content clips.

These are the same arguments as to why people use sweeps rather than an RTA in REW, btw, so there is nothing new to see here.
3ll3d00d is online now  
post #29 of 153 Old 12-26-2016, 04:03 PM - Thread Starter
Put together the code to talk to the device, enable/disable sensors, use the onboard FIFO, and stream data out to a handler callback in chunks to facilitate the multiprocess handling. I added a bunch of unit tests, so it should work if the device behaves as expected (famous last words) -> https://github.com/3ll3d00d/vibe/blo...ibe/mpu6050.py

I also have the device itself in hand; it's surprisingly tiny.



Need to work out the wiring and hook it up next.
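For anyone wiring one up, a minimal sketch of pulling raw samples off the chip. The register addresses are from the MPU-6050 register map; the bus object is assumed to be an smbus/smbus2-style handle (opening it is not shown), and the function names are mine:

```python
ACCEL_XOUT_H = 0x3B  # first of 6 accel data registers (X/Y/Z, high byte then low)

def to_signed16(msb: int, lsb: int) -> int:
    """Combine two data-register bytes into a signed 16-bit sample."""
    value = (msb << 8) | lsb
    return value - 0x10000 if value & 0x8000 else value

def raw_to_g(raw: int, full_scale_g: float = 2.0) -> float:
    """Scale a raw sample to g for the configured full-scale range
    (at the default +/-2g that's 16384 counts per g)."""
    return raw * full_scale_g / 32768.0

def read_accel(bus, address=0x68):
    """Read one x/y/z sample in g; bus is an smbus/smbus2-style object."""
    data = bus.read_i2c_block_data(address, ACCEL_XOUT_H, 6)
    return tuple(raw_to_g(to_signed16(data[i], data[i + 1]))
                 for i in (0, 2, 4))
```

With the board flat on a table, read_accel should return roughly (0, 0, 1): gravity on the z axis is the usual first smoke test.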
Attached: mpu6050.jpg
3ll3d00d is online now  
post #30 of 153 Old 12-27-2016, 11:17 AM
Member
 
andy497's Avatar
 
Join Date: Jan 2013
Location: Michigan
Posts: 118
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 15 Post(s)
Liked: 23
I'll be very interested in what kind of data you get out of that MPU-6050 (besides also being very interested in the project in general). Those units are very popular in DIY quadcopter flight controllers. They are typically pretty noisy with a biased distribution, so the best data comes after sensor fusion/Kalman filtering with multiple other sensors like gyro/GPS/barometer. Hopefully this application has much higher SNR and that's not a problem. You'll have the advantage of being able to measure a stable periodic signal where you can average samples.

Also, I think those chips advertise a sample rate of 1000 Hz, but they may or may not have a non-defeatable hardware digital filter operating at 256 Hz. Even keeping the test signal frequency well below that, harmonics will be getting in from everywhere and may flood the measurement with aliasing (i.e. 2x Nyquist only applies with band-limited input). Or not. The DIY quad folks are making great strides in autonomous flight, and you can imagine a plate with four spinning blades on it is pretty riddled with high-frequency vibration.
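The datasheet arithmetic behind those numbers, as I recall it from the InvenSense register map (so treat this as an assumption worth checking): the output rate is the gyro clock, 8 kHz raw or 1 kHz with the DLPF enabled (the DLPF is also where the ~256 Hz accelerometer bandwidth figure comes from), divided by 1 + SMPLRT_DIV:

```python
def output_rate_hz(smplrt_div: int, dlpf_enabled: bool = True) -> float:
    """MPU-6050 output data rate = gyro output rate / (1 + SMPLRT_DIV),
    where the gyro clock is 8 kHz with the DLPF disabled and 1 kHz with
    it enabled (accelerometer output itself maxes out at 1 kHz).
    Assumption: values per the InvenSense register map."""
    base = 1000.0 if dlpf_enabled else 8000.0
    return base / (1 + smplrt_div)
```

So SMPLRT_DIV = 1 with the DLPF on gives the 500 Hz rate you'd likely run at for sub-bass work, comfortably above twice the DLPF corner.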
3ll3d00d likes this.
andy497 is offline  