feat(sequences): create a table for easy detection aggregation #405
Conversation
Hello @frgfm, thanks for this PR. It's elegant, I like the idea! It also meets another of my needs: on the platform we need to display the last 10 detections to show the most recent information, but by doing that we currently lose the time of the first detection; that will now be covered. There's just one thing missing: taking azimuth into account when creating your streams, since a single camera returns several streams at the same time (one per viewpoint). What's left is to implement fetch_unlabeled_detections, which returns the last 15 unacknowledged streams received on day {from_date} and, for each of these streams, their last 10 detections.
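For illustration, here is a minimal in-memory sketch of what fetch_unlabeled_detections could return under the description above. The `Sequence`/`Detection` dataclasses and field names are simplified stand-ins for the actual models, not the PR's implementation:

```python
from dataclasses import dataclass
from datetime import date, datetime


@dataclass
class Sequence:
    id: int
    camera_id: int
    is_acknowledged: bool
    started_at: datetime
    last_seen_at: datetime


@dataclass
class Detection:
    id: int
    camera_id: int
    created_at: datetime


def fetch_unlabeled_detections(sequences, detections, from_date: date,
                               max_sequences: int = 15, per_sequence: int = 10):
    # The (at most) 15 most recent unacknowledged sequences started on `from_date`
    day_seqs = sorted(
        (s for s in sequences
         if not s.is_acknowledged and s.started_at.date() == from_date),
        key=lambda s: s.last_seen_at,
        reverse=True,
    )[:max_sequences]
    # For each sequence, its last 10 detections, linked by camera + time window
    out = {}
    for seq in day_seqs:
        related = sorted(
            (d for d in detections
             if d.camera_id == seq.camera_id
             and seq.started_at <= d.created_at <= seq.last_seen_at),
            key=lambda d: d.created_at,
            reverse=True,
        )
        out[seq.id] = related[:per_sequence]
    return out
```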
Easy fix, no problem
Sure, I wanted to cover that in another PR once we're all on the same page for the data model (the implementation will be quite trivial since I've already done it twice)
all good then!
it might be a matter of taste but I think that stream is unclear and we should call it a wildfire :) otherwise: thanks, seems really clean
Oh yeah, I wasn't set on the naming of the table. I was actually thinking of sequence, which is the most accurate (remember the detections may not have been confirmed yet, and a wildfire could be spotted by multiple cameras, but here it's basically a set/list/sequence of detections from the same camera)
@MateoLostanlen one thing I just realized about adding azimuth to this table and the algorithm:
To me, this raises two questions:
My recommendation, considering the rarity of the last case, would be not to assume we have the same azimuth within a sequence and simply not put it in the table (it can be retrieved dynamically from the detections if needed). No splitting of sequences for now. What do you think?
@frgfm The azimuth sent is the azimuth of the center of the camera, not the one corresponding to the detection. Therefore it does not change :) The point is to identify the viewpoint here. The detection azimuth is computed later by the platform using the bbox :)
If there are multiple detections on the same viewpoint, let's have a single stream; I will manage it on the platform
Alright, so this means you fill the value of camera/azimuth with that in mind? (when a sequence is created, can we trust the value from camera.azimuth?) That answer gives me everything I need to proceed. I'll do 2 PRs: this one to correctly implement the sequence mechanism, then another to add platform-specific routes
Yes, the engine sends the camera's center azimuth with each detection. Therefore, in the case of a PTZ camera, we know the viewpoint that sent the detection
One last thing that got me worried (non-blocking here but still): are you saying that the azimuth in the detection is not the azimuth of the camera angle when the picture was taken?
Either a static camera or a PTZ camera which patrols through n positions will send the azimuth of the center of the camera; it's basically always the same. Then, on the platform side, using the center azimuth, the camera FOV and the bbox, we deduce the smoke azimuth, but you don't have to worry about that on the API side
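For illustration only, the platform-side deduction described above could look like the following sketch. The `smoke_azimuth` helper and the relative-bbox convention are assumptions for the example, and this computation is explicitly out of scope for the API:

```python
def smoke_azimuth(center_azimuth_deg: float, fov_deg: float,
                  bbox: tuple[float, float, float, float]) -> float:
    """Deduce the smoke azimuth from the camera's center azimuth, its FOV,
    and a bbox given as (xmin, ymin, xmax, ymax) in relative [0, 1] coords.

    The horizontal offset of the bbox center from the image center maps
    linearly onto the camera's horizontal field of view (a simplification
    that ignores lens distortion).
    """
    x_center = (bbox[0] + bbox[2]) / 2
    return (center_azimuth_deg + (x_center - 0.5) * fov_deg) % 360
```

A bbox centered in the frame yields the camera's own azimuth; a bbox near the right edge shifts it by up to half the FOV.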
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #405      +/-   ##
==========================================
+ Coverage   84.85%   85.56%   +0.71%
==========================================
  Files          35       38       +3
  Lines         997     1053      +56
==========================================
+ Hits          846      901      +55
- Misses        151      152       +1
```
As discussed, the goal of this PR is to resurrect the concept of Events from the legacy data model.
Here is how I implemented this:
Data model
I went minimal after thinking seriously about it and selected:
This way, we can get the geo metadata from the camera, and the azimuth etc. from the detections. started_at = created_at of the first detection in the sequence, and last_seen_at = created_at of the last.
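As a tiny illustration of that rule, the two timestamps can be derived from the member detections like this (hypothetical helper, not part of the PR):

```python
from datetime import datetime


def sequence_bounds(detection_times: list[datetime]) -> tuple[datetime, datetime]:
    # started_at = created_at of the first detection in the sequence,
    # last_seen_at = created_at of the last one
    return min(detection_times), max(detection_times)
```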
I designed it this way to avoid adding event_id / sequence_id / stream_id in the detection table (which is quite "pure" for now)
The way I see it, by fetching a stream we can quite easily fetch all related detections in SQL. Considering a time link vs an ID link is very useful for the next part around the creation/update logic, as I thought of a few options.
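To illustrate the time-based link (no sequence_id column in the detection table), here is a minimal, self-contained sketch using sqlite3. The table and column names are simplified assumptions, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detections (id INTEGER PRIMARY KEY, camera_id INTEGER, created_at TEXT);
CREATE TABLE sequences (id INTEGER PRIMARY KEY, camera_id INTEGER,
                        started_at TEXT, last_seen_at TEXT);
""")
conn.executemany("INSERT INTO detections VALUES (?, ?, ?)", [
    (1, 7, "2024-01-01T10:00:00"),
    (2, 7, "2024-01-01T10:01:00"),
    (3, 8, "2024-01-01T10:00:30"),  # other camera: must not be picked up
])
conn.execute(
    "INSERT INTO sequences VALUES (1, 7, '2024-01-01T10:00:00', '2024-01-01T10:01:00')"
)

# Join on camera + time window instead of a foreign key in detections
rows = conn.execute("""
    SELECT d.id FROM detections d
    JOIN sequences s
      ON d.camera_id = s.camera_id
     AND d.created_at BETWEEN s.started_at AND s.last_seen_at
    WHERE s.id = 1
    ORDER BY d.created_at
""").fetchall()
```

ISO-8601 timestamps compare correctly as text here, which keeps the BETWEEN predicate valid without a dedicated datetime type.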
Creation/update logic
Here is the algorithm I came up with (the goal is a limited scope, but data we can trust):
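Based on the discussion above, a minimal sketch of such a creation/update step could look like this. The 30-minute gap cut-off, the `Sequence` fields and the `upsert_sequence` helper are assumptions for illustration, not the PR's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed cut-off: a detection further apart than this starts a new sequence
GAP = timedelta(minutes=30)


@dataclass
class Sequence:
    camera_id: int
    azimuth: float  # viewpoint (camera center) azimuth, per the discussion
    started_at: datetime
    last_seen_at: datetime


def upsert_sequence(sequences: list[Sequence], camera_id: int,
                    azimuth: float, created_at: datetime) -> Sequence:
    for seq in sequences:
        if (seq.camera_id == camera_id and seq.azimuth == azimuth
                and created_at - seq.last_seen_at <= GAP):
            # Extend the existing sequence for this viewpoint
            seq.last_seen_at = created_at
            return seq
    # No recent sequence for this viewpoint: create one
    seq = Sequence(camera_id, azimuth, created_at, created_at)
    sequences.append(seq)
    return seq
```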
Review suggestion
Go by commit:
(the others are just utils)
I started with the minimal version, and haven't implemented the tests yet. Any feedback is welcome!