The lane-line sensor is used to extract and publish data describing the position and curvature of the road lines on the lane which the EGO vehicle currently occupies. It uses ground truth data from map annotations, which can optionally be corrected. You can read more about how annotation data is used in the Input Data section.

The lane-line sensor currently only supports the Cyber bridge and publishes data in a format compatible with Apollo 5.0. Each message follows the perception_lane format and, aside from the header, contains data about one or more lines in a format compatible with CameraLaneLine. Fields populated and published by the simulator describe:

* the color and shape of the line (white/yellow, solid/dotted),
* the position of the line in relation to the EGO vehicle (right/left, ego/adjacent/third, etc.), and
* the curve of the line in sensor space, defined as a third-degree polynomial (see the Curve Definition section for details).

Please note that even though the lane-line sensor visualization in the simulator shows detected lines as an overlay on a color image, the image itself is not part of the published data and is only shown as a visual aid.

## Curve Definition

Each lane curve is described as a third-degree polynomial, with the coordinate space centered on the sensor, the x axis pointing towards its front, and the y axis pointing towards its right side. This coordinate space uses only two dimensions, ignoring altitude. The image below shows the described coordinate space.

Each curve is defined by six values - a, b, c, d, longitude_min, longitude_max - as defined in LaneLineCubicCurve. Given these parameters, the curve function f(x) can be defined as:

f(x) = a + b * x + c * x^2 + d * x^3 for x ∈ [longitude_min, longitude_max]

(On the referenced image, longitude_min and longitude_max are shown as MinX and MaxX, respectively.)

It's important to note that polynomial coefficients are calculated via polynomial regression, which means the final function is an approximation and might not match the input data (red dots) perfectly. This is mostly noticeable on steep curves. Input data for the polynomial regression is sampled directly from map annotations for the given environment, trimmed to the sensor's field of view (FOV on the referenced image) and the defined visibility range (see JSON parameters for more information). Details about how the points are sampled can be found in the Input Data section.
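As a concrete illustration of the curve definition above, the short sketch below evaluates such a cubic at a few longitudinal positions and restricts the evaluation to the valid range. The six field names (a, b, c, d, longitude_min, longitude_max) follow the values listed above; the `CubicCurve` class and the sample coefficients are illustrative assumptions, not part of the published message definition.

```python
from dataclasses import dataclass


@dataclass
class CubicCurve:
    """Illustrative container for the six values described above
    (a, b, c, d, longitude_min, longitude_max)."""
    a: float
    b: float
    c: float
    d: float
    longitude_min: float
    longitude_max: float

    def evaluate(self, x: float) -> float:
        """Return the lateral offset f(x) = a + b*x + c*x^2 + d*x^3.

        The curve is only defined for x in [longitude_min, longitude_max],
        so values outside that range are rejected.
        """
        if not (self.longitude_min <= x <= self.longitude_max):
            raise ValueError("x is outside the curve's valid longitudinal range")
        return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3


# Example with made-up coefficients: a gently bending lane line
# sampled every 5 m in front of the sensor.
curve = CubicCurve(a=1.8, b=0.01, c=-0.002, d=0.0001,
                   longitude_min=0.0, longitude_max=40.0)
for x in range(0, 45, 5):
    print(f"x = {x:5.1f} m -> y = {curve.evaluate(float(x)):6.3f} m")
```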
## Input Data

By default, the lane-line sensor will use lines defined in the map's annotation data. All of the spatial data and metadata for published lines will be based on this, which eliminates issues related to image processing but requires precise annotations.

Since annotation data can be relatively sparse, each segment is resampled along its length before the polynomial regression step. Sampling density depends on the SampleDelta parameter, which can be defined in the JSON parameters. An example of how the annotation data is resampled before final processing is shown on the image below; a minimal sketch of this resampling and fitting step is also included at the end of this section. Points used for curve approximation (red dots) are based on white and yellow annotation lines.

In some cases, annotation data imported from external sources might not match the environment perfectly. If you have no option to improve the alignment, you might want to try using the automated correction option. This uses image processing and will attempt to align annotation keypoints with lines detected on road intensity maps. Please note that the results may vary based on the intensity map's quality.

To use automatic line correction, open the lane-line detector tool and use it with the Generate Line Sensor Data option enabled. An example of how offset map annotation data can look before (blue and yellow lines) and after automated correction (green lines) is shown below. Note that this correction only affects what the sensor sees and does not change the annotation data itself.

The lane-line sensor currently only supports CyberRT message types.
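To make the resampling and regression steps described above more tangible, the sketch below resamples an annotated polyline at a fixed SampleDelta spacing along its length and then fits the third-degree polynomial by least squares. It is only a minimal approximation of what the simulator does internally, assuming sensor-space x/y points; the helper names and the example coordinates are made up for illustration and are not part of the simulator's API.

```python
import numpy as np


def resample_polyline(points: np.ndarray, sample_delta: float) -> np.ndarray:
    """Resample a 2D polyline (N x 2, sensor-space x/y) at roughly
    sample_delta spacing along its arc length."""
    # Cumulative arc length of the original annotation points.
    seg_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum_length = np.concatenate(([0.0], np.cumsum(seg_lengths)))
    total = cum_length[-1]
    # New sample positions along the line, spaced by sample_delta.
    targets = np.arange(0.0, total + 1e-9, sample_delta)
    # Interpolate x and y independently against arc length.
    xs = np.interp(targets, cum_length, points[:, 0])
    ys = np.interp(targets, cum_length, points[:, 1])
    return np.column_stack((xs, ys))


def fit_cubic(samples: np.ndarray):
    """Least-squares fit of y = a + b*x + c*x^2 + d*x^3 to the samples.
    Returns (a, b, c, d, longitude_min, longitude_max)."""
    x, y = samples[:, 0], samples[:, 1]
    # np.polyfit returns the highest-order coefficient first, so reverse it.
    d, c, b, a = np.polyfit(x, y, deg=3)
    return a, b, c, d, float(x.min()), float(x.max())


# Example: a sparse annotated segment (sensor-space coordinates, metres),
# resampled every 0.5 m (a stand-in for the SampleDelta JSON parameter).
annotation = np.array([[0.0, 1.8], [12.0, 2.1], [25.0, 2.9], [40.0, 4.5]])
samples = resample_polyline(annotation, sample_delta=0.5)
a, b, c, d, lon_min, lon_max = fit_cubic(samples)
print(f"f(x) = {a:.4f} + {b:.4f}*x + {c:.6f}*x^2 + {d:.8f}*x^3 "
      f"for x in [{lon_min:.1f}, {lon_max:.1f}]")
```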