import math
from geojson import Point
from turfpy.measurement import rhumb_destination  # rhumb-line destination point

center = Point([self.observation["longitude"], self.observation["latitude"]])  # GeoJSON Point is [lon, lat]
altitude = float(self.observation["ascent(feet)"]) * 0.3048  # feet -> meters
fov_tan = math.tan(self.fov)  # self.fov: diagonal field of view, in radians
diagonal_distance = altitude * fov_tan  # ground diagonal covered by the frame, in meters
distance = diagonal_distance / 2  # center-to-corner distance on the ground
bearing = (float(self.observation["compass_heading(degrees)"]) - 90) % 360  # calculate bearing
offset = math.degrees(math.atan(self.video_height / self.video_width))  # half-angle of the frame diagonal
# Calculate GPS coordinates of the video's four corners
# (rhumb_destination returns [lon, lat]; reversed to [lat, lon])
options = {'units': 'm'}
self.top_left = rhumb_destination(center, distance, (bearing - offset + 180) % 360 - 180, options).geometry.coordinates[::-1]
self.top_right = rhumb_destination(center, distance, (bearing + offset + 180) % 360 - 180, options).geometry.coordinates[::-1]
self.bottom_right = rhumb_destination(center, distance, (bearing - offset) % 360 - 180, options).geometry.coordinates[::-1]
self.bottom_left = rhumb_destination(center, distance, (bearing + offset) % 360 - 180, options).geometry.coordinates[::-1]
# Current drone position as [lat, lon]
self.current_position = [self.observation["latitude"], self.observation["longitude"]]
# Pixel offset of the detection (self.dx, self.dy) from the frame center
normalized = [self.dy - self.video_height / 2, self.dx - self.video_width / 2]
distance_from_center_px = math.sqrt((self.video_width / 2 - self.dx) ** 2 + (self.video_height / 2 - self.dy) ** 2)
diagonal_px = math.sqrt(self.video_width ** 2 + self.video_height ** 2)
percent_of_diagonal = distance_from_center_px / diagonal_px
distance = percent_of_diagonal * diagonal_distance  # ground distance to the detection, in meters
angle = math.degrees(math.atan(normalized[0] / (normalized[1] or 0.000001)))  # epsilon guards against division by zero
# if the detection is in the right half of the frame, rotate it 180 degrees
if normalized[1] >= 0:
    angle += 180
point = rhumb_destination(center, distance, (bearing + angle) % 360, options).geometry.coordinates[::-1]
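As a side note, the quadrant handling above (the epsilon guard plus the conditional 180° flip) can also be expressed with `math.atan2`, which resolves all four quadrants directly and is equivalent modulo 360. A small self-contained sketch (the function name is illustrative, not from the original code):

```python
import math

def detection_angle(dx, dy, width, height):
    """Angle (degrees) of a detection relative to the frame center,
    equivalent mod 360 to the atan-plus-flip logic above."""
    # Signed pixel offsets from the frame center, matching `normalized` above.
    norm_y = dy - height / 2
    norm_x = dx - width / 2
    # atan2 resolves the quadrant; the +180 reproduces the flip above.
    return (math.degrees(math.atan2(norm_y, norm_x)) + 180) % 360

print(detection_angle(1920, 540, 1920, 1080))  # detection at the right edge -> 180.0
print(detection_angle(0, 540, 1920, 1080))     # detection at the left edge -> 0.0
```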
I have successfully rewritten the code in Python, and it works well when the gimbal pitch is set to -90. If I want to extend it to handle cases where the pitch value is different from -90, what should I do?
ghost changed the title from "Camera is not looking downward" to "handle cases where the pitch value is different from -90" on Dec 19, 2023
Sounds hard to me because, in addition to much harder math, you'd also have to account for the topography of the terrain (imagine a mountain in front of you; the georeferencing would be very different from flat land or a valley).
I am considering a flat surface for my example. In the previously mentioned code, the assumption was that the point directly below the drone sits at the center of the image. However, when the gimbal's pitch angle differs from -90, an offset is introduced that shifts the footprint's apparent location within the frame, and I cannot figure out the mathematics behind it.
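For the flat-ground case described above, one piece of the math can be sketched simply: with gimbal pitch θ in degrees (where -90 is straight down and 0 is horizontal), the optical axis meets the ground at a horizontal distance of `altitude * tan(90° + θ)` along the compass heading, so the footprint center shifts by that amount from the point directly below the drone before the corner offsets are applied. A minimal sketch under that flat-ground assumption (function name is hypothetical):

```python
import math

def ground_center_offset(altitude_m, gimbal_pitch_deg):
    """Horizontal distance (m) from the point directly below the drone to
    where the optical axis hits flat ground.
    gimbal_pitch_deg: -90 is straight down (nadir); 0 is horizontal."""
    # Angle between the optical axis and the vertical (nadir) direction.
    off_nadir_deg = 90.0 + gimbal_pitch_deg
    if off_nadir_deg >= 90.0:
        raise ValueError("optical axis never reaches the ground")
    return altitude_m * math.tan(math.radians(off_nadir_deg))

# At -90 the axis points straight down, so the offset is zero.
print(ground_center_offset(100.0, -90.0))            # 0.0
# At -45 the offset equals the altitude: 100 * tan(45 deg).
print(round(ground_center_offset(100.0, -45.0), 6))  # 100.0
```

Note this only moves the footprint center; an oblique view also makes the footprint a trapezoid rather than a rectangle, since the far edge of the frame covers more ground than the near edge.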