Testing, reliability, troubleshooting, consistency, robustness, and practicing


  • SkipMorrow
    replied
    I think we have pretty similar development cycles. By "development", I mean the entire solution development. Not just software development.

    Step 1. Build what will interact with the mission model. Whatever Lego pieces are going to manipulate the mission model, build it, hold it in your hand, and see if it can do the job. Sometimes it's a passive thing that won't need to articulate or move in any way; it'll just be attached to the robot. An example of this was the piece we used for transportation to lift the ramp. We just used a bent beam that stuck out of the front of the robot. In that case, the builder would hold the piece rigidly in their hand and try to push it under the ramp to see if it would lift. Other times there could be a variety of moving pieces that would need to move correctly for the mission to work. The bottom line is you want to try to "move like a robot". If you have to use your eyes, feel how the mission model is reacting, and compensate somehow, the robot isn't going to be able to do that, and your attachment isn't going to work. We often try our attachments like this with our eyes closed. We also try to see what happens if the robot's approach is off by a little bit. When we have an attachment that is reliable and robust enough, then we go to step two.

    Step 2. Attach the interaction pieces to the robot. Our team has a "base attachment" that fits perfectly on our base robot, leaving power take-off points for our medium motors. We usually have a couple of these base attachments lying around, ready for someone to use. You would then attach the interaction pieces to the base attachment. Perhaps you are trying to work with someone else so that more than one mission can be accomplished on a single run from base. In that case, you would have to work together to make sure all the interaction pieces can live in harmony on the base attachment. Once you get everything attached, you put the robot in the optimum position to interact with the mission model and test it. We generally don't write any code, but instead use the brick menus and the built-in motor control to operate the attachments. Sometimes we have to write very short code for this test if we need a specific motor speed or timing. We definitely are not working on driving out there yet, although driving to the model is a consideration and has to be taken into account even for step one. If your attachment idea is too big or won't allow you to get to where you need to be, then it won't work. Oftentimes this step will reveal a weakness in your design, and you will kind of hover between steps 1 and 2 while you fine-tune your solution.
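
    For what it's worth, a "very short" test program like that might look something like the sketch below. This is just a sketch in Pybricks MicroPython (not necessarily what a team would actually use; many teams use the EV3-G blocks), and the port, speed, and timing values are placeholders:

        #!/usr/bin/env pybricks-micropython
        # Attachment-only test: run the medium motor at a specific speed for a
        # specific time, with no driving involved.
        from pybricks.ev3devices import Motor
        from pybricks.parameters import Port, Stop

        attachment = Motor(Port.A)   # medium motor on the power take-off (assumed port)

        attachment.run_time(500, 1200, then=Stop.HOLD)    # 500 deg/s for 1.2 s
        attachment.run_time(-500, 1200, then=Stop.COAST)  # return to the start position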

    Step 3. Program the robot to drive to the mission and operate the attachment as designed. Test and make adjustments to the program. Ideally you should not have to do much building at this point, but it does sometimes happen that you need to go back to step one. Once the mission is working, say, two or three times, then you go to the 10/10 reliability test. If we get 10/10 working, then we do some robustness tests. Perhaps we try different lighting. Vary the start position a little. What happens if you forget to reset your attachment? Etc. I can't say that we have a list of robustness tests that we use for every mission, but we try to do some testing for each mission. We don't go for 10/10 with each robustness test. This is more to get a good feel for when the robot requires more accurate external conditions.
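
    As an illustration of what a step-3 program can look like, here is a minimal skeleton in Pybricks MicroPython. The ports, wheel dimensions, distances, and angles are made-up placeholders, not any particular mission:

        #!/usr/bin/env pybricks-micropython
        # Single mission run: drive out, operate the attachment, drive home.
        from pybricks.ev3devices import Motor
        from pybricks.parameters import Port
        from pybricks.robotics import DriveBase

        left = Motor(Port.B)
        right = Motor(Port.C)
        attachment = Motor(Port.A)
        robot = DriveBase(left, right, wheel_diameter=56, axle_track=114)

        robot.straight(600)             # drive out to the model (mm)
        robot.turn(45)                  # line up on the mission model
        attachment.run_angle(400, 180)  # operate the attachment
        robot.turn(-45)
        robot.straight(-600)            # drive back to base

    The 10/10 reliability test is then just rerunning the same program from the same starting position and tallying the results.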

    These steps are not formal steps where a kid can say "I am on step 2 and plan on starting step 3 by the end of practice". It's more of a mindset and an understanding.



  • Dean Hystad
    replied
    Running missions vs Testing, continued (because I am too long-winded to fit inside a single post).

    As Skip says, there are lots of different types of testing, and different types of tests are good for different things. When my kids were swinging their "hammer" by hand, and in some of the tests where they used the EV3 motor control, that is what Skip refers to as "Testing": something you do when trying something new. Even at this level I think it is good to have a plan and to include some of what Skip calls "Robustness" testing. When you are first trying ideas, it is a great time to adjust the parameters and see how those adjustments affect results. My kids didn't know that striking the pad near the outer edge made the rocket launch faster, and that if you strike the lever near the pivot the rocket will not achieve escape velocity no matter how hard you swing the hammer. I refer to this kind of testing as "Experimentation" because its main goal is to create new knowledge.

    From what I can tell, "Troubleshooting" is just "Reliability" testing done by accident. You run your mission and something unexpected happens. You run the mission again and again until you isolate where the actual behavior diverges from the expected behavior. Then you do a little problem solving to design a solution to the new problem, and probably do some more "Reliability" testing to see how well the solution works. I think rookie teams do a lot of "Troubleshooting", mostly because they don't yet know enough to predict what might affect the success of the mission. You write a mission, run it three times, and it fails on the fourth run. Now you either run it again and, if successful, decide that 4 of 5 is good enough, or you start running the mission with the intention of causing the failure to repeat. You have now transitioned into reliability testing.

    Skip is correct that being Robust has little to do with being Consistent. The most reliable robots may never perform a mission the exact same way two times in a row, whereas a mission that is really consistent may only be consistent in your basement and isn't robust at all. Most FLL teams strive for consistency. It is very pleasing to have the robot perform in a perfectly predictable manner. I think this affection results from us thinking that robots and computers are precise. While it is true that robots and computers are precise, not much else in FLL is. Operators are not precise. Tables are not precise. Lighting is not precise. Power is not precise. Traction is not precise. The robot may perform exactly the same actions every time and have different results every time. When you can accept that this is true, and embrace sloppy robustness over precise consistency, that is when your missions will start to improve.



  • Dean Hystad
    replied
    The escape velocity is a great example of why 10 for 10 testing is not an efficient use of time. Here is what one of my teams did for testing escape velocity.

    First we discussed what parameters are important in this mission. The team quickly decided on:
    1. How hard you hit the paddle
    2. Hitting the paddle, not the pivot or the arm

    Initial testing for both was performed without a robot just to verify that the assumptions were correct. The team built a hammer out of a motorcycle wheel and a 15L beam. They struck the paddle with the hammer using various amounts of force to get an idea of how much was needed. Next a slight modification was made to the hammer so it could be attached to the robot and they repeated the tests using the EV3 motor control. The robot could not hit the paddle hard enough to achieve escape velocity.

    The team discussed ideas for hitting harder.
    1. Use gears to increase speed.
    2. Use gears to increase force.
    3. Make the hammer heavier.
    4. Make the hammer longer.
    5. Make the hammer shorter.

    Options 3, 4 and 5 were tested by hand. Making the hammer shorter did not look promising, but making the hammer heavier or longer did. Minor modifications were made so the longer and heavier hammers could be attached to the robot and tested using the EV3 motor control. Both worked well, but the team preferred the heavier hammer. Testing with gears was more difficult so options 1 and 2 were dropped in favor of option 3. 10/10 testing is not very useful at this time because variability is inherent to this type of manual/hand testing. Repetition is still required. A single test teaches nothing. Two similar results is a trend. Three similar results is interesting.

    During the heavy/long hammer testing it became obvious that hitting the paddle near the outer edge produced the best results. Using the EV3 motor control they tested moving the robot around North/South then East/West. They also experimented with striking the paddle with the robot North of the paddle and with the robot West of the paddle. They determined that it didn't matter at all if the robot was north or west of the paddle. They determined that the East/West alignment was forgiving (+/- 1" worked fine), but the North/South alignment was critical (outside +/-1/2" and the mission started to fail). 10/10 testing is still overkill. We are looking for trends, so three tests at each position is plenty of information.

    Now the team had a potential solution and a lot of information about how the robot had to activate the model to score points. Up to now, not a single line of code had been written. Not a second of time was wasted driving the robot from base out to the model. The team had the information they needed to write a program that drove out to the escape velocity model and, more importantly, the knowledge to successfully test their solution. First they tried aiming the robot by eye. They programmed the robot to drive to the model and wait for a button press. They ran the program, recorded the position of the robot, pressed the button to activate the model, and recorded the results. By eye they could put the robot in the correct East/West position almost every time, but the North/South positioning was really hard to achieve.
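
    A test harness along those lines might look like the sketch below (Pybricks MicroPython; the ports, distances, and speeds are assumptions, not the team's actual program):

        #!/usr/bin/env pybricks-micropython
        # Drive to the model, pause so the team can record the robot's position,
        # then strike the paddle when the center button is pressed.
        from pybricks.hubs import EV3Brick
        from pybricks.ev3devices import Motor
        from pybricks.parameters import Port, Button
        from pybricks.robotics import DriveBase
        from pybricks.tools import wait

        ev3 = EV3Brick()
        hammer = Motor(Port.A)
        robot = DriveBase(Motor(Port.B), Motor(Port.C), wheel_diameter=56, axle_track=114)

        robot.straight(500)      # drive out to the model
        ev3.speaker.beep()       # signal "measure my position now"
        while Button.CENTER not in ev3.buttons.pressed():
            wait(10)             # wait for the center button press
        hammer.run_angle(900, 120)   # swing the hammer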

    Ideas were discussed on how to get better North/South placement.
    1. Drive out, turn and back into the south wall, then drive out and turn to the striking position.
    2. Use a jig to position the robot in base.
    3. Wall follow using a wheel against the south wall.

    Three sub-teams were formed to develop each solution. All solutions were tested 10 times (there is a time and place for all kinds of tests), and the results were used to pick the best solution.

    The jig was the least successful. This was better than aiming by eye, but it only succeeded 3/10.
    Using the wall to square and using a wall-following wheel both worked 9 of 10 times. On one of the failures the rocket shot over the top and bounced back down; on the other, everything looked good, but the rocket didn't make it around the bend at the top of the model and fell back down.

    The team chose to square to the south wall because it didn't require any extra attachments to follow the wall.
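
    For anyone curious, squaring on the wall can be sketched in code like this (Pybricks MicroPython; the speed, time, distance, ports, and wheel dimensions are placeholders):

        #!/usr/bin/env pybricks-micropython
        # Back up slowly until both drive wheels are pressed against the south
        # wall, then drive out a fixed distance from that now-known position.
        from pybricks.ev3devices import Motor
        from pybricks.parameters import Port
        from pybricks.robotics import DriveBase
        from pybricks.tools import wait

        robot = DriveBase(Motor(Port.B), Motor(Port.C), wheel_diameter=56, axle_track=114)

        robot.drive(-80, 0)   # creep straight backwards into the wall
        wait(2000)            # long enough for both wheels to settle square
        robot.stop()
        robot.straight(400)   # north/south position is now repeatable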

    Now we began testing the mission. We discussed what might change from run to run.
    1. Robot starts in the wrong position in base.
    2. Hammer starts in the wrong position in base.
    3. Mat is in the wrong position.

    For option 1 the team started the robot in different positions in base, recording the position and whether the mission was successful or not. Because the robot backed into the South wall, the mission was completely insensitive to North/South positioning errors. Placing it against the wall or almost out of base didn't matter. The robot was more sensitive to East/West positioning errors, and we saw the original +/-1" error envelope from our earlier tests. The team modified the mission to use the light sensor to find the black line by the model. This fixed the East/West error sensitivity.
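
    The line find can be sketched like this (Pybricks MicroPython; the sensor port, drive speed, and the 20% reflection threshold are assumptions you would calibrate on your own table):

        #!/usr/bin/env pybricks-micropython
        # Creep forward until the light sensor sees the black line by the model,
        # which removes the East/West start-position error.
        from pybricks.ev3devices import Motor, ColorSensor
        from pybricks.parameters import Port
        from pybricks.robotics import DriveBase
        from pybricks.tools import wait

        eye = ColorSensor(Port.S3)
        robot = DriveBase(Motor(Port.B), Motor(Port.C), wheel_diameter=56, axle_track=114)

        robot.drive(100, 0)            # creep toward the line
        while eye.reflection() > 20:   # stop when reflection drops to "black"
            wait(5)
        robot.stop()                   # East/West position is now set by the line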

    For option 2 the team modified the mission to raise the hammer against a hard stop. Doing that, it doesn't matter whether the hammer is in the up or down position when the mission starts.
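
    In code, resetting against a hard stop can be as simple as stalling the motor gently and zeroing the encoder. A rough sketch (Pybricks MicroPython; the port, speed, and duty limit are placeholders):

        #!/usr/bin/env pybricks-micropython
        # Raise the hammer until it stalls against the hard stop, then zero the
        # encoder so every run starts from the same known angle.
        from pybricks.ev3devices import Motor
        from pybricks.parameters import Port, Stop

        hammer = Motor(Port.A)

        hammer.run_until_stalled(200, then=Stop.COAST, duty_limit=30)
        hammer.reset_angle(0)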

    For option 3 we moved the mat around East/West and North/South and ran the robot, recording the East and South gaps around the mat and the success or failure. The mission was completely insensitive to moving the mat East or West. The mission failed if the South gap exceeded 1/2". The team decided to adjust the mission slightly to hit further out on the paddle (about 1/4"), and this resulted in a mission that works whether the mat is tucked under the South wall as much as 1/4" or the gap is as wide as 3/4". It was decided that this was probably more variation than we would ever see at the tournament.

    I can't say how long this process took, as it was spread out over a period of 2 months. Initial tests with hand-held prototypes started very early. Testing multiple solutions to pick the best happened mid-October. Mission testing started mid-November.

    Re-reading this, I make it sound like the process was really organized and pre-planned, and that the team was very accomplished; neither impression is true. I only work with rookie or maybe second-year teams these days. These kids were smart, but I think all FLL kids I talk to are smart. Starting out, they didn't know how to solve the mission or what kinds of things were important to control and which things could mostly be ignored. Much of that was learned during manual testing. If you cannot launch the rocket using a tiny weight on any length of stick, chances are the robot can't either. Using the EV3 motor control quickly exposed that some hammers we built were too heavy to move and that some attachments were too wobbly to hit the paddle reliably. Hands-on testing is better for learning than letting the robot, the programs, and the driving get in the way of things. Plus, it is fun to whack the escape velocity model and have an excuse for doing so. I will say that once we started testing the reliability of the mission, the kids were pretty impressive. The earlier testing taught them what to look for. When I asked what kinds of things we should test for, they already knew that North/South placement was critical.



  • Testing, reliability, troubleshooting, consistency, robustness, and practicing

    Sorry for the repost. First one got flagged for spam when I corrected a typo. Grrr... timdavid, your reply was lost too.

    Many of us have our opinions about testing. Some of us (me!) like the "ten times" approach. These are my thoughts and personal approach.

    1. Kid creates what he/she thinks will be a good solution for a mission.
    2. Kid runs possible solution and it fails dramatically. Go back to step 1.
    3. Eventually possible solution works. That's one out of ten.
    4. Run it again. Did it work? If you can get ten successful runs in a row, then you are probably close to being competition-ready (a small tally sketch follows this list).
    5. If it only works, say three times before it fails again, then it is time to analyze the failure. Is there anything that can be done to fix it so that failure mode never happens again? If so, make the changes and go back to step 1.
    6. Sometimes random happens, and there seems to be no solution to make the mission more reliable. Escape velocity was one such mission for us this year. We came very close to 10/10 reliability, but sometimes the spaceship would still fall down. The team discussed having some mechanism to help hold the spaceship up, but they were concerned about it not being in accordance with the rules. Ultimately, the team decided the solution "was reliable enough", even though we could rarely get the 10/10 test to pass.
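
    For the tally in step 4, a tiny script (plain Python, nothing to do with the robot; the run results below are made up) shows the bookkeeping:

        # Log each run as pass/fail and check the streak of successes at the end.
        def current_streak(results):
            """Count consecutive successes at the end of the run log."""
            streak = 0
            for ok in reversed(results):
                if not ok:
                    break
                streak += 1
            return streak

        runs = [True, True, False] + [True] * 10   # made-up run log
        print("Successes:", sum(runs), "of", len(runs))
        print("Current streak:", current_streak(runs))
        print("Competition-ready?", current_streak(runs) >= 10)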

    Terminology:
    * "Testing" is what you are doing when you try something new. It's probably the first 1/10 and 2/10 runs. Does this solution "work"?
    * "Reliability" and "consistency" frequently are interchanged. Many times they can be, but there are some differences. If you think about it, "consistent" doesn't say anything about how successful something is. You can consistently miss every time, and be very consistent. "Reliability" has a goal inference. You are reliable at work if your boss can usually count on you to deliver a good product. A robot is reliable if it can solve a mission 10/10 times. A robot that solves a mission 9/10 is less reliable. Reliability is the quality of being reliable, dependable or trustworthy. We all want robots that are reliable. Document your reliability tests.
    * "Robustness" is the quality of a system to continue to be reliable, even when inputs are altered. NASA frequently won't launch rockets when the winds are too high. The launch systems have been tested for reliability up to certain wind speeds. Beyond that level, the reliability is uncertain. Similarly, if the robot can be started two cm north or south of the optimum start location and still solve the mission reliably, then you know something about the robustness of your robot. Some inputs you may have a lot of control over (robot start position), and others you may not have any control (tilted table or lighting conditions at competition). We also want very much to have robots that are robust. Note that you can (and probably should) test for robustness. Document your robustness tests.
    * "Troubleshooting" is what you are doing when you are trying to determine the failure cause and create a workaround or better solution.
    * "Practice" is what you are doing when you have humans repeat something to become more consistent. Not necessarily more reliable. Practice DOES NOT make perfect. Perfect practice makes perfect. Practice makes consistent. Document your practice sessions.

    Like I said, these are my thoughts. This is not a primer on reliability. This comes from 20+ years of doing training in the military, and six years of maintenance and reliability testing for the navy. I'd like to see what others here like to do in the name of reliability.