Functions and blocks list

1. Decision Making

1.1. Calculate distance and angle

Inputs: two 2-element vectors:

  • position 1: [x1, y1] [px]
  • position 2: [x2, y2] [px]

Parameters: none
Outputs: 2 element vector: distance[px], angle to target[rad]
Sample rate: inherited
Demo: demo_command_simple_robot

This function calculates the distance (in the same units as the inputs) and the direction from point 1 to point 2.
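
A minimal sketch of the computation (the block's exact implementation is not shown in this manual; atan2 is the conventional choice for the direction):

    p1 = [100; 200];               % point 1: [x1; y1] in px (illustrative)
    p2 = [340; 80];                % point 2: [x2; y2] in px
    d  = p2 - p1;
    distance = sqrt(sum(d.^2));    % distance in input units (px)
    angle    = atan2(d(2), d(1));  % direction from point 1 to point 2 [rad]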

1.2. Map markers to robot positions

Inputs: 160-element vector [is_valid, marker_color_id, xc, yc, area]x32
Parameters: none
Outputs: [is_valid, x,y,angle]x16
Sample rate: inherited
Demo: demo_process_image

This function searches the marker list for markers matching the objects described in the process_object_descriptions.m file. See "Process image into raw markers positions" for a description of the marker list format. The output is a 64-element vector; use "reshape(output,16,4)" to get a user-readable list. Each quadruplet [is_valid, x, y, angle] corresponds to one line in the configuration file, e.g. the 3rd quadruplet is object 3.
"is_valid" denotes whether the object was found: 1 = found, 0 = not found, 2 = ambiguous (more than one object of this kind found in the marker list). Positions [x, y] are in pixels; the angle is in radians, referenced to the local coordinate system. See "Coordinate system used and angle calculation" for a detailed description.

Process_object_descriptions.m file

The format of an object description vector is:
[object_id object_type designator_1 designator_2 orientator
adjust_u adjust_v adjust_theta area_min area_max designator_distance_max orientator_distance_max];

  • object_id - self-describing
  • object_type - 1 = simple designated (single designator marker), 2 = double designated (two markers for designation)
  • designator_1, designator_2, orientator - ids of colour markers; must be the same as in testcolors.txt, as provided by process_image_to_raw_obj_pos or another image processing routine
  • adjust_u - move the object centroid horizontally relative to the designator center - not yet implemented. May be used later to correct the real robot position (to move the detected position towards the robot's center of rotation)
  • adjust_v - move the object centroid vertically relative to the designator center
  • adjust_theta - rotate the object centroid by this angle (radians)
  • area_min, area_max - discard markers whose area is outside this range
  • designator_distance_max - only meaningful when object_type=2; if using a two-LED designator, the maximum distance from each marker in the designator to the other marker
  • designator_distance_max with object_type=1 means the range in which other objects are NOT expected to be in order to classify the designator as valid. Note that this may not work in the current implementation, because I have modified the algorithm to be robust against very small markers, which may appear in some versions of the accelerated image processing routine
  • orientator_distance_max - maximum radius in which the orientator marker is looked for

Note: these distances are very important as there will be multiple similar markers in view!
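
For illustration, a hypothetical entry in process_object_descriptions.m might look like this (all values are made up, not taken from a real configuration):

    % [object_id object_type designator_1 designator_2 orientator ...
    %  adjust_u adjust_v adjust_theta area_min area_max ...
    %  designator_distance_max orientator_distance_max]
    [1 1 2 0 3 0 0 0 50 400 100 60]
    % object 1: simple designated (type 1), designator colour id 2,
    % designator_2 unused, orientator colour id 3, no centroid adjustment,
    % marker area between 50 and 400 px, designator exclusion range 100 px,
    % orientator searched within 60 px.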

1.3. Process image into raw markers positions

Inputs: image in a vector of 921600 uint8.
Parameters: none
Outputs: 160-element vector  [is_valid, id, xc, yc, area]x32
Sample rate: inherited
Demo: demo_process_image

This block processes the image and returns a constant-length vector containing data about the found markers. It is a wrapper around F. Wornle's imgProcSilent function from the CMU1394_VisionTools toolbox.
The output is a 160-element vector, [is_valid, id, xc, yc, area]x32. Use "reshape(output,5,32)" to obtain a readable version (see the sketch after the list below).

  • is_valid - denotes whether a particular entry is a valid (found) marker.
  • id - colour id, as configured in testcolors.txt. The first line is id=1, the second is id=2, etc.
  • xc, yc, area - marker parameters, in pixels.
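
A usage sketch for unpacking the output:

    markers = reshape(output, 5, 32);          % one column per marker slot
    found   = markers(:, markers(1, :) == 1);  % keep only valid entries
    for k = 1:size(found, 2)
        fprintf('marker id %d at (%.1f, %.1f), area %.0f\n', ...
                found(2, k), found(3, k), found(4, k), found(5, k));
    end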

This block uses testcolors.txt as a configuration file for imgProcSilent. See the CMU1394_VisionTools documentation for details about this file.

1.4. Wrap angle

Inputs: angle[rad]
Parameters: none
Outputs: wrapped angle[rad]
Sample rate: inherited
Demo: demo_wrap_angle.mdl

This function ensures that the angle is normalized to ±pi; this may be helpful when determining the difference between two directions.
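
One common formulation of such wrapping (the block's internal implementation may differ):

    wrap = @(a) mod(a + pi, 2*pi) - pi;   % maps any angle into [-pi, pi)
    wrap(3*pi/2)                          % -> -pi/2
    % typical use: heading error between two directions (values illustrative)
    target_angle = pi/2; current_angle = -3*pi/4;
    err = wrap(target_angle - current_angle);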

2. Measurement Input

2.1. Add dummy image to vis window

Inputs: none
Parameters: image file name.
Outputs: 921600-element vector containing a 640x480x3 image.
Sample rate: 0.1s
Demo: demo_test_object_recognition.mdl

This block adds a still image to the visualization window underlay. Supported file types are bmp, png and jpg.

2.2. Capture video and add video underlay

Inputs: none
Parameters: mode (RGB or YUV)
Outputs: 921600-element vector containing a 640x480x3 image.
Sample rate: 0.1s
Demo: demo_video_underlay.mdl

This block adds the camera feed to the visualization window underlay. It requires a FireWire camera connected to the computer.
There are two modes of camera operation: mode=1 is 640x480 RGB at 15 fps; mode=0 is 640x480 YUV411 at 30 fps. The former can yield better detection resolution, the latter lower delay. YUV411 means that there is a luminance value for each pixel, but colour is sampled only once per 4 pixels; this yields lower colour resolution but faster operation.
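
Illustrative bandwidth arithmetic behind the trade-off (YUV411 packs 6 bytes per 4 pixels, i.e. 1.5 bytes per pixel); both modes consume roughly the same bus bandwidth, trading colour resolution for frame rate:

    640*480*3   * 15   % RGB:    3 B/px  at 15 fps ~= 13.8 MB/s
    640*480*1.5 * 30   % YUV411: 1.5 B/px at 30 fps ~= 13.8 MB/s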

2.3. Combo: capture image, add video underlay, process to raw markers

Inputs: none
Parameters: mode (RGB or YUV)
Outputs: as in "Process image into raw markers positions"
Sample rate: 0.1s
Demo:

This block is a combination of the basic blocks above; this has been found to speed up processing considerably. You can use it exactly as you would use a series connection of "Capture video and add video underlay" and "Process image into raw markers positions".

3. Simulation

3.1. Passive object model

Inputs: force [N*px/m]
Parameters:

  • Mass [kg]
  • Friction [N*px/s]
  • Starting position x, y [px]
  • Starting speed x,y [px/s]

Outputs:

  • 2 element vector containing position x[px], position y [px]

Sample rate: continuous (inherited from output)
Demo: demo_simple_robot.mdl
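
A hedged sketch of the dynamics this block presumably implements, a point mass with viscous friction, m*dv/dt = F - b*v (this is an assumption, not taken from the block's source):

    m = 1; b = 0.5; dt = 0.01;        % mass, friction, step (illustrative)
    pos = [100; 100]; vel = [0; 0];   % starting position [px], speed [px/s]
    F = [2; 0];                       % applied force
    for t = 0:dt:5
        acc = (F - b*vel) / m;        % assumed: m*dv/dt = F - b*v
        vel = vel + acc*dt;           % Euler integration of acceleration
        pos = pos + vel*dt;           % and of velocity into position
    end
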
3.2. Simple robot model

Inputs: speed [px/s], turn rate[rad/s]
Parameters: start position x[px], start position y [px]
Outputs: 3 element vector containing

  • current x [px]
  • current y [px]
  • current angle [rad]

Sample rate: continuous (inherited from output)
Demo: demo_simple_robot.mdl

This block integrates the turn rate into an orientation, rotates the speed vector into this direction, and then integrates the rotated vector into a position. It does not model any transient behaviour (e.g. inertia).
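
A discrete sketch of the kinematics described above (the block itself integrates continuously; values are illustrative):

    dt = 0.01; x = 100; y = 100; theta = 0;   % start pose
    v = 50; omega = 0.2;                      % speed [px/s], turn rate [rad/s]
    for t = 0:dt:10
        theta = theta + omega*dt;             % integrate turn rate -> heading
        x = x + v*cos(theta)*dt;              % rotate speed into heading and
        y = y + v*sin(theta)*dt;              % integrate into position
    end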

4. Supervisory control

4.1. Add robot symbol to vis window

Inputs: 3 element vector consisting of position x[px], position y[px], angle [rad]
Parameters: Object ID
Outputs: none
Sample rate: inherited
Demo: demo_simple_robot

Note 1: The Object ID should be unique throughout the model; otherwise the results are unpredictable.

4.2. Add passive object to vis window

Inputs: 2 element vector consisting of position x[px], position y[px]
Parameters: Object ID
Outputs: none
Sample rate: inherited
Demo: demo_kickable_ball

Note 1: The Object ID should be unique throughout the model; otherwise the results are unpredictable.
Note 2: As every visualization object shares the same visualization window, you can open two distinct models (for example, demo_kickable_ball and demo_simple_robot) and run them independently. The resulting objects will be merged and shown in one visualization window.

4.3. Get command point

Inputs:none
Parameters: vector containing the ids of buttons.
Outputs: 5-element vector [trigger, id, x, y, angle]
Sample rate: 0.1s
Demo: demo_input_point

This block allows the user to click within the figure to designate a point's coordinates and orientation. The coordinates and orientation are then output along with the selected button id.
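
A hedged sketch of consuming the output (e.g. in a MATLAB Function block fed by this block; it assumes the trigger element pulses when a new point is entered):

    function react_to_command(cmd)     % cmd = [trigger, id, x, y, angle]
        if cmd(1)                      % assumed: trigger marks a fresh click
            fprintf('button %d: point (%.0f, %.0f), angle %.2f rad\n', ...
                    cmd(2), cmd(3), cmd(4), cmd(5));
        end
    end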

You must supply the button ids in the configuration dialog. For example, specifying [1 2 3] gives you three buttons:

[figure: the command point window with three buttons]