  • Color sensor calibration doesn't work

    SOLVED

    TL;DR

    - The color sensor has a narrow range of optimal performance, within 1/2 a brick (4mm).
    - If you're not getting good readings from the color sensor, try raising/lowering it by 1/2 a brick.
    - Dean posts a detailed explanation of how color sensor calibration works.

    ======================

    We are trying to use Color sensor -> calibrate -> maximum intensity [100], but it doesn't seem to do anything.

    Sensors are ~8mm from the mat, with a shroud. We place the robot over the Home Base white area. The sensors read ~80 and ~70, respectively. After we run the program, the color sensor values remain the same. Running "color sensor -> calibrate -> minimum intensity [0]" over the black line also doesn't change the sensor values.

    The color sensors perform fine, otherwise.


    Are we missing something?

    Last edited by mageus; 11-18-2017, 10:40 PM.

  • #2
    You are thinking about calibration completely backwards. Setting the maximum intensity to 100 doesn't do anything. This is what you should expect. The same is true for setting the minimum intensity value to zero. Here's why:

    The calibration blocks let you specify what intensity value should be reported as 100% or 0%. Say my uncalibrated sensor returns 80 for white and 5 for black. I set the maximum calibration value to 80 and the light sensor begins reporting white as 100. The sensor still sees white as 80, but the reported value is rescaled (and offset) to return 100. Next I resample my sensor value for black (because it could change) and it reads 4. I set the minimum calibration value to 4 and the sensor begins reporting black as zero.
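
    Here's the same arithmetic as a quick Python sketch, assuming a simple linear map from the sampled raw range onto 0-100 (the firmware's exact internals aren't published, so treat the formula as illustrative):

        # Linear rescale sketch; raw_black/raw_white are the samples above.
        def calibrated(raw, raw_black=4, raw_white=80):
            pct = (raw - raw_black) * 100.0 / (raw_white - raw_black)
            return max(0.0, min(100.0, pct))   # clamp to the reported range

        print(calibrated(80))   # 100.0 -> white now reports 100
        print(calibrated(4))    # 0.0   -> black now reports 0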

    It is a good idea to always reset the calibration before sampling values to be used for calibration. I think it is also a good idea to set calibration using values that were sampled under the current calibration. You will get the best results using this pattern:

    Reset Calibration
    Read Bright (white) intensity
    Set Maximum calibration to the bright intensity
    Read Dark (black) intensity
    Set Minimum calibration to the dark intensity

    Some teams like to use this sequence:
    Reset Calibration
    Find Minimum and Maximum intensity values.
    Set Maximum calibration using Maximum intensity
    Set Minimum calibration using Minimum intensity

    This works OK in most cases because your uncalibrated dark reading is usually close to zero and your uncalibrated bright reading is usually close to 100. The further your sensor is from this ideal, the less successful this type of calibration will be. The reason is that the EV3 recomputes the calibration scaling and offset based on the provided calibration intensity AND the current calibration scaling and offset. For the math to work right you need to take the intensity value using the current calibration.
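
    Here is a toy Python model of that recomputation (my own sketch, not the actual firmware) showing why the second sequence drifts when your readings aren't already near 0 and 100:

        # Toy model: assume the sensor reports (raw - offset) * scale, and the
        # set blocks recompute offset/scale from an intensity sampled under
        # the CURRENT calibration. Illustrative only.
        offset, scale = 0.0, 1.0                    # reset state: reported == raw

        def reported(raw):
            return (raw - offset) * scale

        def set_maximum(intensity):
            global scale
            raw = intensity / scale + offset        # undo current calibration
            scale = 100.0 / (raw - offset)          # that raw now reports 100

        def set_minimum(intensity):
            global offset, scale
            raw_black = intensity / scale + offset  # undo current calibration
            raw_white = 100.0 / scale + offset      # raw currently reported as 100
            offset = raw_black                      # black now reports 0
            scale = 100.0 / (raw_white - raw_black) # white still reports 100

        # First pattern: resample between steps.
        set_maximum(80)                    # bright sampled right after reset
        set_minimum(reported(4))           # dark resampled under the new calibration
        print(reported(80), reported(4))   # -> 100.0 0.0 (up to float rounding)

        # Second pattern: both values sampled before any set block.
        offset, scale = 0.0, 1.0           # reset
        set_maximum(80)
        set_minimum(4)                     # but 4 was sampled BEFORE set_maximum
        print(reported(4))                 # -> ~1.0, not 0: small error because 4 is near 0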

    One other note: There is only one calibration that is used for all color sensors attached to the EV3. You cannot use calibration to make your two sensors match. Normally this doesn't matter very much, but it can if you are depending on intensity values within 5-10% of some threshold value.
    Last edited by Dean Hystad; 11-17-2017, 03:46 PM.



    • #3
      Thanx for the reply.

      Just to clarify:
      - The calibration function just specifies the offset applied to the actual sensor value when it reports the reading.
      - It doesn't actually look at the sensor value at the time the calibrate block is run.
      - If my sensors read '80' and '70', and I specify a max of 80, the new sensor values should read '100' and '90' (assuming it's just an additive correction).
      - The calibration reset function just sets the offset to 0, which makes the sensor read block report the raw/uncorrected sensor values.

      We run into color sensor issues at practice because the time of day affects the white-black-white line intensity, and I don't have a lamp directly over the board. Fortunately, we didn't have problems at quals with the bright gym lights.



      • #4
        Close, but not quite.

        The light calibration block has no idea what sensor you are calibrating. Notice that the port input goes away when calibration is selected.

        Calibration changes offset AND scale. The reset function sets these back to default values (probably 0 and 1). Setting calibration minimum or maximum adjusts both values.
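
        In the toy model sketched back in #2, reset just restores the identity mapping, and the set blocks then recompute both numbers:

            # Toy model from #2 (an assumption, not the real firmware):
            offset, scale = 0.0, 1.0   # reset: reported == raw again
            # set_maximum()/set_minimum() then adjust BOTH offset and scale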

        It is unusual that ambient light is affecting your sensor readings. The EV3 color sensor compensates for ambient light and it does a pretty good job. Really bright lights (like daylight) are too much for the compensation scheme to overcome, but I have tested under many different artificial lights and the change in reported intensity value has always been pretty small.
        Last edited by Dean Hystad; 11-17-2017, 11:26 PM.



        • #5
          Dean is right. It wasn't the ambient light.
          We changed to larger wheels for the regional tournament, to gain time and try to squeeze out one more mission. No matter what height we put the color sensor at, we would only get a max of 70-80 reflected light intensity.
          It turned out the new wheels/tires raise the robot by a little over 1/2 brick. It's a commonly quoted rule that the color sensor should be 8-16mm from the ground. By our testing, there is only a 3mm range within which the color sensor is reliable; readings drop off quickly outside that range.
          There's this one Technic brick that makes a 90-degree turn and staggers the holes by 1/2 brick. That did the trick. Now we get 100% for white and 5-8% for black.



          • #6
            It is those apparently unrelated things that jump out to bite you. I was at a tournament with a team today and their first table run was a disaster. We ran the robot on a practice table and all was well. Then we had technical judging and it was a disaster again. It turns out the competition tables and judging tables had 2x4 walls. The practice table and our table have 2x3 walls. A wire was preventing the robot from squaring up properly, and this wasn't obvious at first. Once we fixed the problem all was well. Who would think wall height was important?



            • #7
              Originally posted by mageus View Post
              By our testing, there is only a 3mm range within which the color sensor is reliable; readings drop off quickly outside that range.
              There's this one Technic brick that makes a 90-degree turn and staggers the holes by 1/2 brick. That did the trick. Now we get 100% for white and 5-8% for black.
              The sensor becomes less sensitive the greater the standoff. I recommend teams use higher standoffs so small standoff variations don't have such a big effect on the intensity value. This also evens out the readings over a larger area and makes it less likely the light sensor will report a little black dot as a thick black line. Take a look at this thread for the results of an experiment I ran on the effect of standoff on intensity:

              https://forums.usfirst.org/forum/gen...ensor-standoff



              • #8
                Dean,

                That's an interesting experiment. Increased specificity at the expense of sensitivity is a quality of all tests, and you demonstrate that well. However, I don't think most teams care about this (nor should they). There is enough contrast between the white/black and the rest of the board that one can choose the highest sensitivity (the sweet spot in standoff height), and then use a wide enough threshold to exclude other parts of the board. Especially since the white lines have this sparkly texture that increases their reflectance. We found a 20% intensity threshold effectively excludes everything else on the board.
                By sensitivity, I mean the standoff that gives you the highest intensity reading for the white line. By threshold, I mean the number you specify in color sensor -> compare -> reflected light intensity.
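
                To spell that out for rookie teams, the compare boils down to nothing more than this (an illustrative sketch, not the actual block):

                    # What the compare block's test amounts to (illustrative only).
                    def below_threshold(intensity, threshold=20):
                        # intensity: calibrated reflected-light reading, 0-100
                        return intensity < threshold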
                Now, if one is trying to tease apart the subtleties on the rest of the board, then the tradeoff in specificity does become important. But I don't think anyone's doing that with the color jumble that is the Hydrodynamics mat.

                The other thing I wanted to point out, for the benefit of rookie teams, is that many of the recommendations on these boards are based on biased information. Multiple posts here and elsewhere on the web cite an optimal mat-sensor distance of 8-16mm. You and I have found different values.

                Too often teams will discard perfectly good tools such as the gyro, jig, color sensor, or touch sensor because they followed suggestions on this board. If something doesn't work, I encourage the kids to figure out why it didn't work, and not to assume it failed for this-or-that reason. We've been able to use the gyro quite successfully, contrary to many of the reports on this forum.

                Also, DON'T ALWAYS BLAME PROGRAMMING. For our team, other issues (EV3 software bugs, EV3 brick issues, gyro issues) were the cause of problems much of the time. Perseverance and a dose of healthy skepticism are key.



                • #9
                  Originally posted by mageus View Post
                  There is enough contrast between the white/black and the rest of the board that one can choose the highest sensitivity (the sweet spot in standoff height), and then use a wide enough threshold to exclude other parts of the board. Especially since the white lines have this sparkly texture that increases their reflectance. We found a 20% intensity threshold effectively excludes everything else on the board.
                  I wish this were true, but it isn't what we are seeing. We scanned swaths of the mat where the robot drives and found several areas where light sensor readings are very close to our black or white values. The mission works great if you drive here, but if you are 1" to the left it thinks that mark is a line. There are many spots on the mat other than black lines that read less than 20% on a calibrated light sensor. And then you have variation caused by standoff. If your robot's wheel clips a wall or field model, even a tiny bit, it can change intensity values by a lot.

                  This doesn't mean the sensor is useless, it just means you should take care what you are looking at. A common way to avoid false positives is to ignore the sensor until you get close to the target. Use rotation sensor feedback to get roughly in position and the light sensor to find the line and exactly position the robot. You can call this "sensor fusion" when you talk to the judges, and it is very much in vogue for controlling autonomous vehicles. A self-driving car will use odometry, GPS, laser, radar and vision information to make driving decisions. Your robot can use odometry (rotation sensors and gyros), vision and sonar!
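
                  A rough sketch of that pattern in Python (the robot helpers here are hypothetical stand-ins, not a real EV3 API; in EV3-G you would build the same thing from Move and Wait blocks):

                      # Dead-reckon most of the way, then creep and watch the sensor.
                      def run_to_line(robot, approach_mm=350, black_threshold=20):
                          robot.drive_mm(approach_mm)     # odometry: sensor ignored out here
                          robot.start_driving(speed=10)   # slow creep near the target
                          while robot.reflected_intensity() > black_threshold:
                              pass                        # wait for the black line
                          robot.stop()                    # positioned on the line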
                  Last edited by Dean Hystad; 11-20-2017, 03:37 PM.



                  • #10
                    Originally posted by mageus View Post
                    Also, DON'T ALWAYS BLAME PROGRAMMING. For our team, other issues (EV3 software bugs, EV3 brick issues, gyro issues) were the cause of problems much of the time. Perseverance and a dose of healthy skepticism are key.
                    "Your results may vary" is not quite strong enough to describe the black cloud you've been living under. Unlike your weird battery problem most problems are created internally by the team. Most bugs are design. A lot of solutions just won't work because they are based on a false premise. The most popular of which is "I can drive all the way across the board, make several turns, and flip that little lever with a stick." The second most popular is "That driving strategy I had will work if I make a really good jig." These are still widespread fallacies that I saw repeated many times at our tournament this weekend. Teams that figure out odometry is only good for about 16" bump into a whole new batch of bad solutions. "I can drive all the way across the mat and the only thing that looks dark is that black line." or "The gyro fixes all my odometry problems and I can depend on gyro assisted odometry for everything." fall into this camp. Eventually you figure out why the light sensor wasn't working, or the gyro wasn't working, and unless you are on Magnus' team the problem is probably going to be because you were doing something dumb. Most likely your strategy was flawed. Less often (but still common) you made an error translating your strategy into a robot and program. Very rarely is there a bug in the programming software or robot hardware.

                    But once every 7th blue moon you have a bent battery clip.
                    Last edited by Dean Hystad; 11-20-2017, 03:41 PM.



                    • #11
                      Dean,
                      I see what you're saying. I think you and I are saying similar things. "variety is the spice of life" truly applies to the FLL board. You have to use whatever works for that part of the task or for that part of the board. If something doesn't work, try something different. Mix and match modalities. Don't get stuck in a rut.

                      However, I've spoken to people who say "our gyro didn't work, and we talked to other teams who say don't use the gyro, so we gave up." As a newbie coach visiting this board with fresh eyes, I feel obligated to point out there's a lot of that going on here too. People don't have the time to do experiments like you do, so they just follow the advice given here blindly. And if someone writes "don't use that, it doesn't work", people will believe it.



                      • #12
                        Originally posted by mageus View Post
                        People don't have the time to do experiments like you do
                        When I can convince a team to run experiments they end up saving time because they have data to help make decisions that don't lead to dead ends a month later. A simple "odometry is garbage" experiment might take an entire meeting, but I see teams in their third season of FLL that haven't learned that lesson. All teams are performing experiments all the time. You can be organized and structure your experiments to have a clear hypothesis and an ordered process, or you can randomly hack away, trying stuff until you find something that works. Either way you eventually come to the same conclusions, but you get there a lot quicker if you "waste" some time doing organized experiments.

                        My daughter's team learned to stop asking me questions because I would always respond "I don't know, maybe we can perform an experiment to find out!". The girls tired of all the experiments and depended on each other for answers. My greatest accomplishment as a coach.
                        Last edited by Dean Hystad; 11-21-2017, 02:02 PM.

