Programming examples for every challenge in FLL 2018?


  • nehashah
    replied
    I mean, do you really care if they ran that block for 1.2 or 1.3 revolutions??? Of course not. What you should really be looking at is how they made sure the robot was exactly where it needed to be at that time.

  • SkipMorrow
    replied
    I'm going to start a new thread on testing, reliability, troubleshooting, consistency, and practicing. Trying to keep this thread more on point for the OP's questions.

  • Dean Hystad
    replied
    Originally posted by brian@kidbrothers.net View Post
    Completely dismantle the robot and redesign it from scratch.
    I'm not a big stickler on teams building their own robot from scratch. I used to be, but as I work with more and more teams I've grown soft. If you are a team that meets 10 hours a week or has 16 weeks between your first meeting and your tournament, I think it is a really good idea. If you meet 4 hours a week and have 9 weeks from start to tournament, I am perfectly happy with the educator robot or a robot design you found online. Modify it a bit here and there to make it your own, but don't waste half your schedule trying to build a robot. If you have the parts, I would build a robot for development and use what I learn to build a robot for the tournament. If you can do this in parallel, great! A lot of teams do this in serial: use the educator robot with some changes for season one, then build a custom robot for year two.

  • Tim Carey
    replied
    Originally posted by julnil View Post

    The team participated in their first FLL this year, and was really unprepared, since the teachers did not know much about this either.
    I got into FLL as a teacher with no experience either. Our first year was a disaster, but the kids had fun and learned a little. Each year, I've learned a little more (some of which I've shared with the kids) and the kids and I have grown as we practiced through the off-season. Some of the things I've learned from here have been so valuable, like Dean sharing how to manipulate mission testing to make programs more robust and reliable. Other things I have learned through playing around on my own, making lots of mistakes and trying new things. I think the most important thing is not to judge yourself too harshly or let the kids get discouraged. Enjoy the process without worrying about trophies and awards.

  • brian@kidbrothers.net
    replied
    Anyway, julnil, here is my advice for you and your team on how to learn from your first FLL season, where your team "was really unprepared, since the teachers did not know much about this either":
    • Make the EV3 robot using the exact design that comes with the kit. This one:


    [Image: the LEGO EV3 Robot Educator driving base]
    • Then, take all the models off your Into Orbit board so all you have is the plastic mat.
    • Create some extremely simple programs using only move tank blocks and spend a while getting a sense for how the robot moves around.
    • Take some of those move tank blocks and put them in a loop to get a sense for the limitations of how the robot moves. For example, have it go straight a few feet, then pause for a second, then back it up the same distance, then pause for a second. Loop that ten times and see how close or how far it gets to returning to the same spot. Or, similarly, have it turn what you think is a 90 degree turn. Loop it 20 times (turn 90 degrees, pause for a second) and see how close or how far it gets to returning to the same heading. (A sketch of both experiments follows this list.)
    • Get some experience moving the robot around the table and see if you can consistently move it from the same spot in base to the same spot out on the board.
    • Once you've got a good feel for how the robot behaves, put the models back in place and try some of the missions closest to base. This year, space travel, solar panel, & tube module were pretty much straight shots from base. But they'd also benefit from some "attachment" design to create a robot arm that would accomplish them more easily.
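    To make those two loop experiments concrete, here is a minimal sketch in Pybricks MicroPython (a text-based option for the EV3); the motor ports, wheel diameter, and axle track are assumptions you would measure on your own robot:

    #!/usr/bin/env pybricks-micropython
    # Repeatability test: drive out and back, then turn in place, and see
    # how far the robot drifts from its starting position and heading.
    from pybricks.ev3devices import Motor
    from pybricks.parameters import Port
    from pybricks.robotics import DriveBase
    from pybricks.tools import wait

    left = Motor(Port.B)     # assumed ports; use your robot's
    right = Motor(Port.C)
    robot = DriveBase(left, right, wheel_diameter=56, axle_track=114)  # mm

    for i in range(10):      # out and back ten times
        robot.straight(600)
        wait(1000)
        robot.straight(-600)
        wait(1000)

    for i in range(20):      # twenty "90 degree" turns
        robot.turn(90)
        wait(1000)

    Seeing how far the robot drifts after ten loops teaches the repeatability lesson better than any lecture could.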
    After you've got a few "wins" where you can feel the satisfaction of completing missions without really knowing much about what you're doing, then you might feel the temptation to try all the missions this way. (After all, it's not too hard to find Mr. Hino's YouTube video where he accomplishes all the Into Orbit missions using pretty much this approach.) But this is where your real opportunity for learning comes in. At this point, I'd do three things:
    • Discuss why you were able to accomplish those missions with a very simple robot design, very simple attachment design, and very simple programming. Break it down step by step -- where the robot is, where it's going, how it gets there, how the robot interacts with the model, what problems it might encounter, etc. etc. etc.
    • Learn how to use some sensors (a starter sketch follows below).
    • Completely dismantle the robot and redesign it from scratch.
    Then tackle some more missions, trying to learn as much as you can about what works and what doesn't, and most importantly, **why** it didn't work and how the kids might improve.
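    For the sensors bullet, a first exercise that pays off immediately is stopping on a line instead of counting wheel rotations. A minimal Pybricks sketch; the port and the threshold of 20 are assumptions, so have the kids print readings from their own mat first:

    #!/usr/bin/env pybricks-micropython
    # First sensor exercise: drive until the color sensor sees a dark line,
    # then stop. Port S3 and the threshold of 20 are assumptions; read
    # values from your own mat to pick a real threshold.
    from pybricks.ev3devices import Motor, ColorSensor
    from pybricks.parameters import Port
    from pybricks.robotics import DriveBase
    from pybricks.tools import wait

    left = Motor(Port.B)
    right = Motor(Port.C)
    line_sensor = ColorSensor(Port.S3)
    robot = DriveBase(left, right, wheel_diameter=56, axle_track=114)

    robot.drive(100, 0)                      # 100 mm/s, no turning
    while line_sensor.reflection() > 20:     # white mat reads high, black low
        wait(10)
    robot.stop()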
    Last edited by brian@kidbrothers.net; 12-27-2018, 09:51 PM.

  • brian@kidbrothers.net
    replied
    Originally posted by Dean Hystad View Post

    Now julnil has heard from a design judge with 19 years of experience, who has coached or mentored dozens of teams, that running the same mission over and over without changing anything teaches you very little and that it is better to make the robot adapt to variation than it is to try to control variation. I even tossed in my philosophy of starting to develop missions out by the mission model and working backward toward base. I've seen teams in their 4th season of FLL that don't know those lessons. I think that kind of thing is useful.
    Any professional concert pianist can point out how virtually **everything** a beginning student is doing is wrong. But it's not terribly helpful for them to do that.

    In this case, I agree with everything you said in the paragraph about "testing." In fact, one of our best moments from the Into Orbit season involved exactly the same example you used. The kids were trying to run the extraction mission, and their solution was to create a rectangular 'box' that was about the same shape as the extraction model, except about a quarter inch larger on each side. They'd try to drop the inverted box over the extraction core samples, then back up the robot to both pull the discs off the axle for 16 points and return them to base for the bonus 10 points, plus the blue disc to drop on the food production mission for another 8. Because it was 34 points (they never got to the 3D printer), they put in a lot of effort. But because the box fit so tightly around the extraction model, it would often fail; the robot just didn't have the precision to get within a quarter inch of the right spot. Finally, one kid said, "Why don't we make the box wider?" A different kid actually grabbed the Technic pieces to make it wider. They didn't even change the program, but I don't think it ever missed after that, and they scored it in all 4 matches at the tournament.

    Perfect example of what you're talking about, right? With the wider box, the starting point of the robot doesn't matter too much. The kids could pretty much eyeball the starting point and still get it done. And making the box wider was a pretty obvious solution (in hindsight, of course). However, it was no accident that the kid who thought of the idea was in his 4th year -- 2 years of FLL Jr., 2 years of FLL. That same kid has made a bunch of mistakes over 4 years and has tried a bunch of sub-optimal strategies over 4 years. But he has learned. Me (or anybody else) telling him in year 1 or 2 or 3 that he was doing a bunch of sub-optimal things wouldn't have been very helpful.
    Last edited by brian@kidbrothers.net; 12-27-2018, 11:02 PM.

  • Dean Hystad
    replied
    Originally posted by brian@kidbrothers.net View Post

    I don't see how that's remotely helpful for julnil. She clearly said: "The team participated in their first FLL this year, and was really unprepared, since the teachers did not know much about this either." She's starting from scratch. How is she supposed to "simulate conditions that may occur during a match"? So then you go on to say that "the most likely variation is that the robot is not always positioned the same way for the start of the mission." One obvious solution to that is some sort of jig, but later on you say "in general I don't like jigs because they get used the wrong way." Then you go on to suggest "the time to use a jig is after you have a reliable mission." But at this point, you've completely changed the subject from where julnil started, and you haven't provided anything at all that's helpful for her.

    So, regardless of whether your posts make people "really angry" or whatever, what do you suggest she can actually do as a mom who wants to help her team for next year?
    First off, julnil is no longer a rookie, so this is not "starting from scratch". There is already one season of experience to show that what the team was doing didn't work very well. I'm guessing there was a lot of aiming and hand positioning of attachments and missions that rarely succeeded. I bet there was a lot of running the same program over and over and making little changes to where the robot was positioned in base. I'm also guessing this was done in a haphazard manner with no plan, no process, and nobody recording any results. In other words the way most rookie FLL teams do things their first season. That is what rookie seasons are for. You hopefully learn that just driving out and poking things doesn't work. You see some teams that have really impressive robots, and you see other teams that have simple robots but still score a lot of points. You see teams doing strange things like backing into walls or stopping over the lines on the mat. You see that a lot of teams are using sensors. You learn a lot during your rookie season.

    After your rookie season you know enough to ask questions. When you ask questions you get a lot of advice. You are still mostly a rookie, so sifting through the advice is as difficult as designing missions. I think a lot of FLL advice is bad. I don't think it is intentionally bad, or that the people who pass it on are bad. I think a lot of the bad advice even works to a degree. Using a jig is a lot better than aiming by hand. Using matched motors is better than using motors with different characteristics (who has enough motors to do this????). Using a checklist will give better results than no documentation on how to run missions. All of these are good things, but if you are a rookie coach, or even a lot of experienced coaches, you might use them for the wrong reason.

    A jig can make a mission that fails most of the time start working half the time or even most of the time. A coach will look at that and say that jigs are great. If we make a better jig maybe the mission will work all the time. It is more likely that the mission has some flaws that are hidden by the jig. Fixing the flaws may eliminate the need for the jig and will certainly result in missions that are more reliable. So jigs aren't bad, but they can have bad side effects. You may never learn how to make missions that are really reliable because you were able to limp along using a jig.

    So Skip offers up a bit of advice about testing. Teams should test their missions, there's no refuting that. The problem with the advice is that you don't learn anything from a mission that works.
    Running the same mission starting from the same place over and over only tells you that the mission works when there aren't any changes or problems. Anyone who's done FLL knows that this is not how things work. Things change when you go from one table to another. Things change day to day. Some things change minute by minute. If everything could be controlled to always be the same there would be no reason for testing. We don't have that kind of control, so our testing has to introduce changes and see how the robot responds. A good mission test will have a plan: "We are going to adjust the starting position east and west until the mission starts to fail all the time. We will carefully control the starting position so we know how far it is from the ideal. We will record each time it fails and where it fails." After you run the test you can analyze the data: "The mission was pretty reliable until we moved the starting position 1" to the East or 1/2" to the West. The mission failed because it bumped into the space station." The analysis should lead to some conclusion: "Driving near the space station is the weakest part of this mission. Can we use another route? Can we use the line by the Escape Velocity model?" You would modify the mission based on the conclusion and test again. After the test you may decide that the mission works well enough: "The mission works pretty well if the robot position is off by less than 1/2 inch. If we move the starting position a little bit to the East I think it will work almost every time, especially if we work that position into our starting jig."
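    Even a scrap of throwaway Python on a laptop keeps that kind of test honest. A minimal sketch of the bookkeeping; every offset and pass/fail result below is invented purely for illustration:

    # Log for an east/west starting-position sweep. Offsets are inches from
    # the ideal start (East positive); all numbers here are made up.
    runs = [
        (-1.00, False), (-0.50, True), (-0.25, True), (0.00, True),
        (0.25, True), (0.50, True), (1.00, True), (1.50, False),
    ]

    passes = [off for off, ok in runs if ok]
    print("variability envelope: about", min(passes), "to", max(passes), "inches")
    for off, ok in runs:
        if not ok:
            print("failed at offset", off, "-- record where and why it failed")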

    I like doing both development and testing at the same time, and I start out by the model. Why write a program to drive from base out to a model if the attachment doesn't work when you start only 6" away? If I wanted to solve the extraction mission I would design an attachment and write a little program that starts right next to the extraction model, moves the attachment and pulls the samples off the axle. If this worked a couple times I might move the starting spot (by the model) a little bit North or East and see how far the robot can be off and have the mission still work. If the "variability envelope" is small I might redesign the attachment so it works over a larger area or I might think about having the robot bump into the model or use a line by the model. After I get things working with the robot starting out by the model I would work my way back toward base. For something close like extraction my next step might be starting from base. For catching the lander I might pick some via points that are easy to get to and identify. Bumping into the aerobic exercise model while driving against the North wall is a good way to know where you are. Or maybe I would have better luck seeing the moon while driving North along the East wall. But the process is pretty much the same at each via point until I get all the way back to base and have a nice reliable mission that is insensitive to all kinds of little bumps and misalignments.
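    That wall-bumping trick is simple in code, too. A minimal sketch of one common way to square up against a wall, in Pybricks MicroPython (a text-based option for the EV3); the ports, duty, and timing are assumptions to tune on your own robot:

    #!/usr/bin/env pybricks-micropython
    # Wall squaring: push both wheels gently backward into the wall. The
    # wall stops the robot, the wheels slip, and the robot ends up square
    # with a known position no matter how crooked it arrived.
    from pybricks.ev3devices import Motor
    from pybricks.parameters import Port
    from pybricks.tools import wait

    left = Motor(Port.B)
    right = Motor(Port.C)

    left.dc(-30)     # low duty so the wheels can slip without climbing
    right.dc(-30)
    wait(1500)       # long enough to reach the wall and settle against it
    left.stop()
    right.stop()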

    Now julnil has heard from a design judge with 19 years of experience, who has coached or mentored dozens of teams, that running the same mission over and over without changing anything teaches you very little and that it is better to make the robot adapt to variation than it is to try to control variation. I even tossed in my philosophy of starting to develop missions out by the mission model and working backward toward base. I've seen teams in their 4th season of FLL that don't know those lessons. I think that kind of thing is useful.
    Last edited by Dean Hystad; 12-27-2018, 07:02 PM.

  • brian@kidbrothers.net
    replied
    Originally posted by Dean Hystad View Post
    Doing the same thing over and over is practice, not testing, and it is a waste of time. The 10 for 10 test proves nothing other than the mission works on your table with your skilled operators and your controlled setup. It doesn't matter if your robot does the same thing each time it is run. What matters is that it succeeds.

    After you can run a mission twice it is ready for testing. For testing you simulate conditions that may occur during a match. The most likely variation is that the robot is not always positioned the same way for the start of a mission. What happens if it is a bit North or South? How far off can it be and still work? If you can move the start position by an inch and it still works, that is a far better indicator that it can handle running on a different table than having it work 10 for 10. After changing starting position try playing with starting heading, or the position of attachment arms. Try shifting the mat so it isn't centered or is crooked. If you follow walls tape a coin to the wall to simulate a knot. How does the robot handle that? If you are using light sensors try running in the dark or with a really bright light. Make up tests for each thing that might happen at a tournament and do your best to make a robot/solution that can adapt and overcome.

    You will never be able to make missions that work all the time on every table, but you can make missions that adapt to reasonable amounts of variation. The better your robot adapts, the more likely it will succeed. Autonomous robots do not succeed by doing the same thing over and over. They have to adapt to changes in the environment. FLL robots are supposed to be autonomous.
    I don't see how that's remotely helpful for julnil. She clearly said: "The team participated in their first FLL this year, and was really unprepared, since the teachers did not know much about this either." She's starting from scratch. How is she supposed to "simulate conditions that may occur during a match"? So then you go on to say that "the most likely variation is that the robot is not always positioned the same way for the start of the mission." One obvious solution to that is some sort of jig, but later on you say "in general I don't like jigs because they get used the wrong way." Then you go on to suggest "the time to use a jig is after you have a reliable mission." But at this point, you've completely changed the subject from where julnil started, and you haven't provided anything at all that's helpful for her.

    So, regardless of whether your posts make people "really angry" or whatever, what do you suggest she can actually do as a mom who wants to help her team for next year?

  • Dean Hystad
    replied
    Originally posted by philso View Post

    Just running a mission some number of times is not useful, as Dean has expounded on at length. Running missions and analyzing what goes wrong is what will improve the probability that it will work over a variety of conditions. One can also extrapolate to estimate what variations one should accommodate. It is also useful to introduce some variations to simulate what can be seen in the real world. Use the success rate to evaluate the effectiveness of the changes to the solution.

    Aim to design solutions that are tolerant of variations but don't pass up opportunities to minimize the variations that can occur. It takes some judgement and experience to decide what variations are reasonable and what variations are unlikely. If the solution can mitigate a sufficient amount of variability, the solution can work on multiple tables with a high probability of success.
    Every time I am with a group of coaches I hear advice about using jigs. In general I don't like jigs much because they get used the wrong way. A jig should never be a crutch that you depend on to make a mission work, because when jigs are used that way they don't work. Often a jig reduces the mission variables just enough that the mission works pretty reliably at home. This can hide the fact that your mission is not very robust. Because the mission worked all the time at home under carefully controlled conditions, you go to the tournament with great confidence, until the first run where that super reliable mission doesn't work any more.

    The time to use a jig is after you have a reliable mission and you want to make it more reliable by starting the robot in the middle of your "variation envelope", or because you want to save time positioning your robot in base. Reserving jig use for missions that don't need a jig sounds goofy, but it is the kind of goofy thinking that leads to really good solutions.

    Occasionally coaches hunt me down (like a dog) at tournaments to talk about how their team is doing and what they can do to be better. A significant number of these conversations start out with "When I first started reading your posts they made me really angry." This would make me sad except it is usually followed by "Once we began to understand about reliability and adaptability and what we could do to limit ways the robot can fail the kids started enjoying programming a lot more and changed how they designed missions and we are having a lot more fun than before." As long as I keep hearing "having a lot more fun than before" you can count on hearing my goofy opinions.
    Last edited by Dean Hystad; 12-27-2018, 03:30 PM.

  • Dean Hystad
    replied
    Originally posted by philso View Post

    Have you ever watched NASCAR or F1 pit stops?
    Enough to know the pit crew doesn't pick up the car and carefully aim it down the track.

    I have yet to get a pit pass, but some of my co-workers have. I was in Maranello, Italy at the Ferrari wind tunnel during a Formula 1 race. They had a bunch of screens showing what was happening on the track and graphs showing some of the data being collected from the car. It was impressive even though I understood very little of it. When I was younger I used to pit for my uncle's dirt-track modified stock car. We didn't have screens and data streaming from the car (or a wind tunnel).
    Last edited by Dean Hystad; 12-27-2018, 02:59 PM.

  • philso
    replied
    Originally posted by Dean Hystad View Post
    When you run your 10/10 test you are testing for variability. If all conditions are the same every time the robot will run a program exactly the same way every time and it will succeed every time. Of course all conditions are never the same run to run so every run has some variability. If you are able to run the mission 10 times and it works 10 times that means you have not exceeded your "variability envelope". Your mission was able to adapt to changes in starting conditions and the environment and still work. The problem with this type of testing is you end up with no understanding about what your "variability envelope" is. How was this run different than the one before or from runs on another day? Were your operators really on top of their game when they got 10/10, or were they tired or jumpy? Was it a sunny day or a cloudy day? Was it warm or cool? Was the table set up properly? It is a terrible thing to optimize a mission to the wrong conditions. There is always variation in every run, but if you try to run missions the same way every time you leave the variation up to luck. Why do that when you can control most of the variation and see how the robot responds?

    Other than leaving what you are testing up to chance, 10/10 testing is bad because it is really inefficient. I see teams run their mission 10 times (some use 5 times) and record how often it failed. They make a small mod to the mission and run 10 times to test. It may take 15 minutes to run the robot 10 times and about 2 minutes to make the small mod. You just spent 17 minutes on one small problem, and chances are you guessed wrong on what the problem was. When you test using controlled variation you know what the problems are. The mission has to start with the robot in the perfect position or the perfect angle or the A attachment has to be 1/4 inch above the table but no higher or lower.

    Finally, the biggest reason why 10/10 testing is bad is because your missions never really become robust. The more you run the robot the better you get at controlling the variables. You get better at positioning the robot in base. You get better at setting up the attachments. You get better at using the same method for starting the mission. You (the operator) get better and this lets your robot get worse. Having highly trained operators sounds like a great thing, but their skills may only work for their practice table. When judging I'll sometimes ask who does the worst job running the robot (not in those words) and then explain to the team why that person is extremely important and a big reason why their robot is working so well. If you aren't going to use controlled variation at least test missions with different operators. My girls often had me run the missions. When dad could run the robot you knew it was working really well (Dad is an unbelievably bad operator. Whether this is intentional or not is yet to be determined).
    Have you ever watched NASCAR or F1 pit stops?

  • philso
    replied
    Originally posted by SkipMorrow View Post
    In that case, as long as you have access to the table, parts and laptop, I would simply recommend that you try to solve one or more missions. No matter what you do, you should aim for 100 percent repeatability and reliability. If your solution only works 50 percent of the time, you need to study the failure modes of your design and fix it so that you get to 100 percent. Of course, TRUE 100 percent reliability is not really possible, but that should be your goal. My team uses a 10 out of 10 test. If it works ten times in a row, then it is considered reliable enough for our needs. And that's ten out of ten, NO EXCUSES. If it fails once FOR ANY REASON, including incorrect placement of the robot or forgetting to reset the attachment, then you have to fix it so it won't happen again and restart the 10/10 test.

    Once you start this, you will start to learn what is reliable and what is not reliable on sight, or even upon explanation from the kids, before they even start building it. "You want to try and shoot Gerhard into the airlock from base? Hmm, do you think that will be very accurate and reliable? Can you think of a way to make it more reliable?" You may see that maybe your base robot needs improvement.

    I really believe that reliability is very important for FLL to be fun for the kids. If you let them design a solution that is only 10 percent reliable, it gets frustrating for the kids. They will continually try to change the code. Drive 0.01 revs farther this time. Drive 0.01 revs less this time. Continually chasing an impossible solution, and it gets frustrating. These aren't surgical robots and you can't expect them to drive across the table and be in a perfect position every time. There are a few things you can do to make your programs more reliable, but the vast majority of the reliability comes from the mechanical solution.

    Have fun. Ask other teams questions. Ask US questions. You are in the right place for help!
    Just running a mission some number of times is not useful, as Dean has expounded on at length. Running missions and analyzing what goes wrong is what will improve the probability that it will work over a variety of conditions. One can also extrapolate to estimate what variations one should accommodate. It is also useful to introduce some variations to simulate what can be seen in the real world. Use the success rate to evaluate the effectiveness of the changes to the solution.

    Aim to design solutions that are tolerant of variations but don't pass up opportunities to minimize the variations that can occur. It takes some judgement and experience to decide what variations are reasonable and what variations are unlikely. If the solution can mitigate a sufficient amount of variability, the solution can work on multiple tables with a high probability of success.

  • Dean Hystad
    replied
    When you run your 10/10 test you are testing for variability. If all conditions are the same every time the robot will run a program exactly the same way every time and it will succeed every time. Of course all conditions are never the same run to run so every run has some variability. If you are able to run the mission 10 times and it works 10 times that means you have not exceeded your "variability envelope". Your mission was able to adapt to changes in starting conditions and the environment and still work. The problem with this type of testing is you end up with no understanding about what your "variability envelope" is. How was this run different than the one before or from runs on another day? Were your operators really on top of their game when they got 10/10, or were they tired or jumpy? Was it a sunny day or a cloudy day? Was it warm or cool? Was the table set up properly? It is a terrible thing to optimize a mission to the wrong conditions. There is always variation in every run, but if you try to run missions the same way every time you leave the variation up to luck. Why do that when you can control most of the variation and see how the robot responds?

    Other than leaving what you are testing up to chance, 10/10 testing is bad because it is really inefficient. I see teams run their mission 10 times (some use 5 times) and record how often it failed. They make a small mod to the mission and run 10 times to test. It may take 15 minutes to run the robot 10 times and about 2 minutes to make the small mod. You just spent 17 minutes on one small problem, and chances are you guessed wrong on what the problem was. When you test using controlled variation you know what the problems are. The mission has to start with the robot in the perfect position or the perfect angle or the A attachment has to be 1/4 inch above the table but no higher or lower.

    Finally, the biggest reason why 10/10 testing is bad is because your missions never really become robust. The more you run the robot the better you get at controlling the variables. You get better at positioning the robot in base. You get better at setting up the attachments. You get better at using the same method for starting the mission. You (the operator) get better and this lets your robot get worse. Having highly trained operators sounds like a great thing, but their skills may only work for their practice table. When judging I'll sometimes ask who does the worst job running the robot (not in those words) and then explain to the team why that person is extremely important and a big reason why their robot is working so well. If you aren't going to use controlled variation at least test missions with different operators. My girls often had me run the missions. When dad could run the robot you knew it was working really well (Dad is an unbelievably bad operator. Whether this is intentional or not is yet to be determined).
    Last edited by Dean Hystad; 12-27-2018, 12:47 PM.

  • SkipMorrow
    replied
    True. But we have easy access to one table and we have one set of operators. So we do the best we can with that. Occasionally we have other drivers step in, but normally the team selects the drivers and we stick with that. Does that set up a single point of failure? Of course. So far we have been lucky. Some day it will bite us.

    When we get access to another table, there are always missions that suddenly don't work. Fortunately, normally it's a small tweak here or there and the mission seems reliable again. Often it identifies a new failure mode that we just hadn't seen before, and the mission becomes even more robust. Sometimes it requires some building, but not very often at all.

    And then it's tournament day. Get to the practice table and bam! All of our missions work! Or none of them! We get it fixed or we rest on our laurels. Then we get to our practice round and then it is something else. And then to our first official round which is on yet another table and by now we are getting closer to having all of the missions working again. Or not. Oh, this table has tape on the wall here. Or a nail head in a critical spot.

    It's all a part of the game. You test and make it as reliable as you can. You try to think of failure modes that haven't bitten you yet, and overlook some others. You find a new way to test. New table perhaps? Put a book under one leg of the table? Move the mat slightly north/east/west? Different lighting? You do what you can, and have fun! It's all a part of the effort.

    The 10/10 test is only one part of our quality control. It's better than a 1/1 test for sure! I don't think of the 10/10 test as practice, but I know it is. The human drivers are a part of the reliability. Every season I have kids who don't want to put hard stops on their attachments. They think they can manually set an attachment exactly where it needs to be each time. They quickly find out that often isn't the case. The 10/10 test will usually make it apparent.
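    Hard stops also let the program do the setting instead of the kids. A minimal Pybricks MicroPython sketch (the port, speeds, and duty limit are assumptions): run the arm into its physical stop, call that angle zero, and every run starts from the same arm position no matter who set up the robot.

    #!/usr/bin/env pybricks-micropython
    # Attachment reset: run the arm gently into its physical hard stop,
    # call that angle zero, then move to a known working position.
    from pybricks.ev3devices import Motor
    from pybricks.parameters import Port, Stop

    arm = Motor(Port.A)

    arm.run_until_stalled(-200, then=Stop.HOLD, duty_limit=40)
    arm.reset_angle(0)          # the hard stop is now the zero reference
    arm.run_target(300, 90)     # raise to the same starting position every run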

  • Dean Hystad
    replied
    Doing the same thing over and over is practice, not testing, and it is a waste of time. The 10 for 10 test proves nothing other than the mission works on your table with your skilled operators and your controlled setup. It doesn't matter if your robot does the same thing each time it is run. What matters is that it succeeds.

    After you can run a mission twice it is ready for testing. For testing you simulate conditions that may occur during a match. The most likely variation is that the robot is not always positioned the same way for the start of a mission. What happens if it is a bit North or South? How far off can it be and still work? If you can move the start position by an inch and it still works, that is a far better indicator that it can handle running on a different table than having it work 10 for 10. After changing starting position try playing with starting heading, or the position of attachment arms. Try shifting the mat so it isn't centered or is crooked. If you follow walls tape a coin to the wall to simulate a knot. How does the robot handle that? If you are using light sensors try running in the dark or with a really bright light. Make up tests for each thing that might happen at a tournament and do your best to make a robot/solution that can adapt and overcome.

    You will never be able to make missions that work all the time on every table, but you can make missions that adapt to reasonable amounts of variation. The better your robot adapts, the more likely it will succeed. Autonomous robots do not succeed by doing the same thing over and over. They have to adapt to changes in the environment. FLL robots are supposed to be autonomous.
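    For the light sensor case, one way to adapt instead of hoping is to calibrate at the start of the run rather than hard-coding a threshold. A minimal sketch in Pybricks MicroPython, assuming a color sensor on port 3:

    #!/usr/bin/env pybricks-micropython
    # Lighting-robust threshold: instead of a number that only works in
    # your practice room, sample the mat and the line before the run and
    # split the difference.
    from pybricks.hubs import EV3Brick
    from pybricks.ev3devices import ColorSensor
    from pybricks.parameters import Port
    from pybricks.tools import wait

    ev3 = EV3Brick()
    line_sensor = ColorSensor(Port.S3)

    ev3.screen.print("On WHITE mat...")
    wait(3000)                           # time to hold the sensor in place
    white = line_sensor.reflection()

    ev3.screen.print("On BLACK line...")
    wait(3000)
    black = line_sensor.reflection()

    threshold = (white + black) // 2     # midpoint adapts to room lighting
    ev3.screen.print("threshold:", threshold)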
    Last edited by Dean Hystad; 12-27-2018, 04:09 PM.
