Bye bye laptop? Am I ready?

There have been some really great ideas put out here by members on how they use the second layers on the SC5000’s as a sampler. For example, grabbing a bunch of samples, putting them all on one track (with third-party software) and then using the cue pads to trigger them, etc. I realize that this would require additional work, but it is a neat idea IMO. I personally don’t think any controller-type player out to date offers anything more than the Prime 4 when you take into account the price, but as you rightfully said, I admit it is and should be a personal preference :slight_smile:

Best of luck with your decision. It is a tough one.


They didn’t say it couldn’t deal with a bigger drive, only that they’d only put a 1TB version in it so far.

The Primes all apparently have wifi circuitry inside; it’s just not activated yet.

I assume, given the claims of overbuilt hardware, that there’s plenty of ARM processing power and memory to do sampling.

I am so curious what the search speed is when it has to manage 50K+ songs … let’s say within that 1TB drive. I’m sure the P4 has the potential to be awesome. I’ll just wait a bit and see what happens in the real world.

This!

The most important factor is not the features of the controller. It is whether the controller’s features fit your workflow, your preferred software (which could, in this case, be the built-in software) and your budget.

Yes, the lack of a sampler could easily be construed as a glaring omission. Frankly, I never use them, but that is no reason not to include them :-D. It being a computer in a box, I have little doubt they can program it in if they want to.

Online streaming. With the number of USB ports on the unit, giving one up for a wifi dongle or USB-to-Ethernet adapter shouldn’t be too big of a problem (by the way, who says there isn’t wifi in the box, just dormant at the moment?), and the same “computer in a box” argument is valid. They should be able to add that through firmware as well. Mind you, most streaming services still don’t allow you to use their tracks in public, although that apparently is changing slowly.

Not sure what’s in there, but if the processors are the same as the 5000s (and why wouldn’t they be?), that should be plenty of processing power.


Indexing being the key here. With a proper index, searching speeds should not deviate much, regardless of the number of tracks involved. It’s when you are searching the actual file list that things start to slow down.
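To illustrate (table and column names here are invented for the example, not Engine Prime’s actual schema), this is roughly the difference an index makes in SQLite:

-- Hypothetical schema, for illustration only.
CREATE TABLE Track (id INTEGER PRIMARY KEY, title TEXT, artist TEXT);

-- Without an index this has to scan every row, so it slows down as the library grows:
EXPLAIN QUERY PLAN SELECT id FROM Track WHERE artist = 'Green Day';
-- reports something like: SCAN TABLE Track

CREATE INDEX idx_track_artist ON Track (artist);

-- With the index, the lookup barely notices the library size:
EXPLAIN QUERY PLAN SELECT id FROM Track WHERE artist = 'Green Day';
-- reports something like: SEARCH TABLE Track USING INDEX idx_track_artist (artist=?)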


To be honest, we can only speculate here, as we (or at least I) don’t know the details of the library DB, but from observation I suspect that is the case.

I wrote this comment in “Sluggish library browsing with huge library”.

It looks like about 20% of the time it takes to perform one operation is spent accessing the drive, and 80% processing. For example: I navigate to one album. The read LED of the drive flashes for less than 2 seconds. It then stops flashing, but it takes about 5 seconds to show results.

I don’t think hardware processing power is the issue, as the SC5000s are powerful machines, capable of excellent pitch stretching on two tunes simultaneously, which I guess (speculating again) requires more processing power.

Also, in my limited experience, navigation times on an SD card are not much better than on mechanical drives, so I suspect it’s more about indexes, data structures and that kind of thing.


I’ve had a quick look at the indexes created on the PC version of Engine Prime - I suspect this is where the problem is, as there are some indexes, but they only reference two fields: track Id and another value (e.g. I can see filename as one, path as another, etc.).

The thing is, for searching, it needs to search track name, album name and artist (and potentially other fields in future EP versions, such as the promised search-in-comments feature).

The indexes mentioned above are not enough, so the code will not be able to use them, and the execution plan will be much slower than if they created an index containing track Id, album name, artist name, filename, track name, etc.

There are also no indexed views for tracks, whereas there are for crates, playlists etc., which is why loading a crate is relatively fast compared to searching. An improvement would be similar to the index mentioned above: create an indexed view with track Id as the unique field and artist, album, filename and track name as the other fields.
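For what it’s worth, the covering index being described would look something like this (table and column names are guesses, not the actual Engine Prime schema):

-- Sketch only; all names are assumptions.
-- Column order matters: the columns you filter on should come first.
CREATE INDEX idx_track_search ON Track (artistName, albumName, trackName, filename);
-- In SQLite the rowid (i.e. the track id) is stored in every index entry anyway,
-- so a search that only needs the id can be answered from this index alone,
-- without touching the main table.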

If anyone is curious, I obtained this information through Microsoft SSMS and the SQLite/SQL Compact Toolbox plugin.


No secrets anymore :smiley:

@JonnyXDA I’ve opened m.db, which seems to be the DB file used by the SC5000s, with a SQLite client, and it seems it’s properly indexed.

The key table seems to be MetaData, which encodes data such as Album, Genre etc. in a key-value fashion. This, I guess, is what’s most actively used while searching. The id column is the foreign key to the Track table, and everything is indexed.
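Pieced together from the queries later in the thread (so the column names should be right, but the rest is a reconstruction), the layout looks roughly like this:

-- Reconstructed sketch of the MetaData table; may not match the real DDL exactly.
CREATE TABLE MetaData (
    id   INTEGER,   -- foreign key to Track.id
    type INTEGER,   -- which kind of value this row holds (album, genre, artist...;
                    -- 5 appears to be comment, 6 record label)
    text TEXT       -- the value itself, e.g. the album name as a string
);

-- One index per column, as far as I can tell (index names are approximate):
CREATE INDEX index_MetaData_id   ON MetaData (id);
CREATE INDEX index_MetaData_type ON MetaData (type);
CREATE INDEX index_MetaData_text ON MetaData (text);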

Having all the info for a track in the same column, with a value telling you what it is (album, song title, artist etc.), means that for every navigation step they need to fetch all the metadata for a track and then filter by key. They cannot limit the projection of the query to just the needed data, as you don’t know what the data is until you filter by key.

So, when navigating from one album to another, they need to query all the metadata of all the tracks, filter by key == album, and then filter by album name. Also, there is no numeric id for album, so filtering by album implies string matching instead of numerical comparison (this is inherent to the audio tags system, where there is no id for album in the first place and albums are defined by the string in the album tag).
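Speculating again, that navigation step probably boils down to a query shaped roughly like this (only id, type and text are confirmed column names; the type value for “album” is a guess):

-- Hedged sketch, not the actual firmware query.
SELECT DISTINCT t.id
FROM MetaData m
INNER JOIN Track t ON t.id = m.id
WHERE m.type = 3                  -- assuming one of the type values means "album"
  AND m.text = 'American Idiot';  -- string comparison on the album name, not a numeric id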

It has surprised me that the DB has a column for comment (key = 5) and record label (key = 6). Hopefully we could see those as part of the search query some day, although currently it seems the search query is too heavy even with those filtered out.

@garrapeta - thanks, it seems I was looking at the wrong table, so I stand corrected :slight_smile:. Though I do think a further index that encompasses all three fields should be used (or at least type and text, as type seems to be used in the WHERE clause to block some keys from being searched). Currently the indexes are only on each individual field, so there should also be an “index_Metadata_id_type_text” or “index_Metadata_type_text”, as I still don’t think the current ones are being used when searching. Unfortunately I can’t check this at the moment, as my go-to SQL inspection tool (SQL Doctor) doesn’t work with SQLite databases, SQLite being an embedded database rather than a client-server RDBMS.
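If that is the case, the missing index would be a one-liner (whether the query planner would actually use it for these searches is another question):

-- The composite index suggested above; a sketch only.
CREATE INDEX index_Metadata_type_text ON MetaData (type, text);
-- A search constrained by type could then narrow to the relevant rows by type first.
-- Note: a leading-wildcard LIKE ('%...%') still can't use the text part of a B-tree index.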

It also seems like this is not the full story, because if you search for a track on the SC5000 you can also see the BPM and Key (which are taken from the Track table), which could cause slowdowns depending on how they’re doing the search. They could be doing this in a couple of ways:

  1. The key-value search, bringing back the track id, then searching for that track id and getting the rest of the metadata, then going to the Track table and getting the BPM and Key information (and possibly track length, as the metadata could be incorrect for this) - see the sketch after the option 2 query below.

  2. Something along the lines of the query below, but bringing back more information:

SELECT
    m.id,
    m.text
FROM MetaData m
INNER JOIN Track t ON t.id = m.id
WHERE m.text LIKE '%Meatloaf%'
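For comparison, option 1 would be something along these lines as separate round trips (a sketch only; the real firmware queries, and the Track column holding the key, are unknown):

-- Step 1: key-value search for matching track ids.
SELECT DISTINCT id FROM MetaData WHERE text LIKE '%Meatloaf%';

-- Step 2: for each id returned, fetch the rest of its metadata.
SELECT type, text FROM MetaData WHERE id = 123;   -- 123 = one id from step 1

-- Step 3: fetch BPM and length (and presumably key) from the Track table.
SELECT bpmAnalyzed, length FROM Track WHERE id = 123;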

The first option is inefficient for the obvious reason, and the interesting thing about the second option is that the first time I ran this query on my laptop it took 2.069 seconds, whereas without the join it takes 0.025 seconds. So just from one join there is an addition of over 2 seconds on a PC - I can easily see this translating to double or triple the time on the SC5000 hardware. I should note that this disappears if you run the query again, as the previous query is cached, so it becomes 0.028 seconds. However, this is not what we want for DJ’ing, as it’s highly unlikely we would search for the same thing more than once in a short space of time.

I think finding a way to put the information needed for searching and returning relevant data into an indexed view would be a good choice here, as you would eliminate either additional trips to the database or costly table joins.
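Worth noting that SQLite only has ordinary, non-materialised views (there is no indexed view as such), so the closest equivalents would be a plain view for convenience, or a denormalised search table populated at import/analysis time so searches never pay for the join. A rough sketch of both (names invented):

-- Plain view: tidier to query, but the join still runs at search time.
CREATE VIEW TrackSearch AS
SELECT m.id, m.type, m.text, t.length, t.bpmAnalyzed
FROM MetaData m
INNER JOIN Track t ON t.id = m.id;

-- Denormalised table rebuilt when tracks are imported/analysed:
CREATE TABLE TrackSearchCache AS
SELECT m.id, m.type, m.text, t.length, t.bpmAnalyzed
FROM MetaData m
INNER JOIN Track t ON t.id = m.id;
CREATE INDEX idx_TrackSearchCache_text ON TrackSearchCache (text);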

Edit: There is a more efficient query I’ve just thought of for option 2:

SELECT
    m.id,
    m.type,
    m.text,
    t.length,
    t.bpmAnalyzed
FROM MetaData m
INNER JOIN Track t ON t.id = m.id
WHERE m.id IN (
    SELECT id
    FROM MetaData
    WHERE text LIKE '%Green Day%'
)

This one completes in 0.099 seconds but would require further processing in code to get the data into the correct format to be displayed onscreen.