Programming examples for every challenge in FLL 2018?


  • Programming examples for every challenge in FLL 2018?

    Is there anywhere online I can find programming examples for every challenge that was in FLL 2018: Into Orbit?
    I like to learn from examples, and I am trying to guide my team through this year's challenges, so they can learn more before next year's challenge.

    Thanks in advance for any answers.

  • #2
    I can see that this is your first post here. I can appreciate the desire to see how the missions can be completed. I am guessing, and maybe I am wrong, that you are/were a rookie coach. So I really want to help. You want to do better next year, and I can't fault you for that!

    To be honest, posting my team's code here wouldn't help you one bit. I know Dean Hystad hates it, but half of my team's programs are a bunch of green move blocks strung together. Some of them even have some medium motor blocks in there. Not knowing what our robot looks like, how it works, where each attachment is mounted, even the spacing between our wheels, all make the code by itself completely useless. And the code examples we have that use My Blocks? Those will be even less helpful, because you won't know what the My Blocks are doing.

    What you really want to do is watch some of the many videos available on YouTube right now. When you watch closely, you can usually make a good guess at what the code is doing. I mean, do you really care if they ran that block for 1.2 or 1.3 revolutions? Of course not. What you should really be looking at is how they made sure the robot was exactly where it needed to be at that time. Does it look like it would be repeatable? Or does it look like they posted a video of the one time it worked out of hundreds of attempts? Look for clever ways to mechanically make sure the robot is in the right position. In particular, do they align off walls and/or mission models? Also, how quickly does the team change out attachments when the robot is in base?

    I'll post some of our code examples, but I can 100% guarantee you that they won't be even 1% helpful.
    Norfolk, Virginia, USA
    FLL Coach and Regional Tournament Head judge since 2014



    • #3
      Generally, the programs, the mechanisms, the robot chassis and the mission strategies are all tightly integrated and are designed to work together so they must be examined as a whole. Seeing just one of the parts of this whole, say the programs, often gives very little insight into the solution. It is like taking a recipe and removing all the nouns, verbs and quantities and replacing them with random words and numbers and then trying to figure out what the recipe makes and why it works.

      A good example would be where teams use a predominantly mechanical solution, where the navigation and mechanism operation are all achieved through mechanical means. The program for such a mission will just be a simple sequence of blocks driving forward and then back to base.

      If there are any competitions you and your team can attend, you will learn more by asking the more accomplished teams to show you how and why they did what they did. They are generally happy to share. If you are in an area with multiple levels of competition and your team did not advance, then go as spectators. We found it easier to learn this way because one does not have to work around a schedule.



      • #4
        There is no need for a programming example that is related to a mission. One example is sufficient for every mission in every challenge that has been released. You drive some distance. You make a turn. You drive to a line and stop. You push the robot back against a wall. You push the robot into a model. You drive forward until a touch sensor is pressed. You use the light sensor to follow along a line. You use two light sensors to square the robot to a line. You write a My Block for some activity that you do often. You slice a really long program into logical My Blocks to make it easier to edit and test. That's pretty much every trick there is in FLL, and most of them are covered in the Robot Educator program examples. Those that aren't are easy to find elsewhere in tutorials or YouTube videos.
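
        There is no EV3-G screenshot to attach here, but as a rough text sketch of a couple of those tricks, here is what they might look like in Python with the ev3dev2 library. Everything in it is a placeholder to tune for your own robot: the ports, the speeds, and the light threshold.

        ```python
        # Rough sketch only: ports, speeds, and thresholds are placeholders.
        from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C
        from ev3dev2.sensor.lego import ColorSensor

        tank = MoveTank(OUTPUT_B, OUTPUT_C)  # left motor on B, right on C (assumed)
        eye = ColorSensor()                  # finds the color sensor automatically

        def drive_to_line(speed=20, threshold=20):
            """A "My Block": drive forward until the sensor sees a dark line."""
            tank.on(speed, speed)
            while eye.reflected_light_intensity > threshold:
                pass                         # still over the light mat, keep going
            tank.off()

        def square_on_wall(seconds=2):
            """Back gently into the wall to square up the robot."""
            tank.on_for_seconds(-15, -15, seconds, brake=False)

        # A mission is then just these "My Blocks" strung together:
        square_on_wall()
        tank.on_for_rotations(30, 30, 2.5)   # drive out 2.5 wheel rotations
        drive_to_line()
        ```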



        • #5
          SkipMorrow, philso, Dean Hystad: thanks to all three of you for your answers. I started coding one challenge tonight, and now I understand what you mean.

          Yes, I am a rookie coach. Not really a coach, even; just a mom trying to help the kids at our school understand more of this before the 2019 challenges.
          I know my way around computers, but it has been many years since I did any kind of programming. Now I work with digitalization projects.
          That is why I find this interesting, and it gives me a way to contribute to the school with something I find rather fun.

          The team participated in their first FLL this year, and was really unprepared, since the teachers did not know much about this either.

          Now I am going to help them learn how to program (and how to think) when solving the challenges in "Into Orbit", so they can be more prepared next year.
          I am using the Christmas holiday to learn it myself first.

          It was not complicated to understand the block programming, so I just have to learn different tips and tricks that I can then teach the kids. The path will also be made while we walk it.
          Really nice to have this forum where I can ask questions. They might be a bit stupid in the beginning, so bear with me.



          • #6
            In that case, as long as you have access to the table, parts and laptop, I would simply recommend that you try to solve one or more missions. No matter what you do, you should aim for 100 percent repeatability and reliability. If your solution only works 50 percent of the time, you need to study the failure modes of your design and fix it so that you get to 100 percent. Of course, TRUE 100 percent reliability is not really possible, but that should be your goal. My team uses a 10 out of 10 test. If it works ten times in a row, then it is considered reliable enough for our needs. And that's ten out of ten, NO EXCUSES. If it fails once FOR ANY REASON, including incorrect placement of the robot or forgetting to reset the attachment, then you have to fix it so it won't happen again and restart the 10/10 test.
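
            To make the rule concrete: the whole protocol is just a streak counter where any failure resets you to zero. A throwaway Python sketch (the prompt wording is made up; the actual "test" is running the mission by hand and answering honestly):

            ```python
            # The 10/10 rule as a tiny tally: any failure resets the streak.
            streak = 0
            while streak < 10:
                answer = input("Did the mission succeed? (y/n) ")
                streak = streak + 1 if answer.strip().lower() == "y" else 0
                print(f"Current streak: {streak}/10")
            print("Ten in a row. Reliable enough for our needs.")
            ```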

            Once you start this, you will start to learn what is reliable and what is not reliable on sight, or even upon explanation from the kids, before they even start building it. "You want to try and shoot Gerhard into the airlock from base? Hmm, do you think that will be very accurate and reliable? Can you think of a way to make it more reliable?" You may see that maybe your base robot needs improvement.

            I really believe that reliability is very important for FLL to be fun for the kids. If you let them design a solution that is only 10 percent reliable, it gets frustrating for the kids. They will continually try to change the code: drive 0.01 revolutions farther this time, drive 0.01 revolutions less the next, continually chasing an impossible solution. These aren't surgical robots, and you can't expect them to drive across the table and be in a perfect position every time. There are a few things you can do to make your programs more reliable, but the vast majority of the reliability comes from the mechanical solution.

            Have fun. Ask other teams questions. Ask US questions. You are in the right place for help!
            Norfolk, Virginia, USA
            FLL Coach and Regional Tournament Head judge since 2014



            • #7
              Doing the same thing over and over is practice, not testing, and it is a waste of time. The 10 for 10 test proves nothing other than the mission works on your table with your skilled operators and your controlled setup. It doesn't matter if your robot does the same thing each time it is run. What matters is that it succeeds.

              After you can run a mission twice, it is ready for testing. For testing, you simulate conditions that may occur during a match. The most likely variation is that the robot is not always positioned the same way for the start of a mission. What happens if it is a bit North or South? How far off can it be and still work? If you can move the start position by an inch and it still works, that is a far better indicator that it can handle running on a different table than having it work 10 for 10. After changing starting position, try playing with starting heading, or the position of attachment arms. Try shifting the mat so it isn't centered or is crooked. If you follow walls, tape a coin to the wall to simulate a knot. How does the robot handle that? If you are using light sensors, try running in the dark or with a really bright light. Make up tests for each thing that might happen at a tournament and do your best to make a robot/solution that can adapt and overcome.
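
              None of this needs fancy tooling; the important part is writing down what you varied and what happened. As a sketch, a few lines of Python can keep the log (the file name, mission names, and fields are invented for illustration):

              ```python
              # Hypothetical helper: log each controlled-variation run to a CSV.
              import csv
              from datetime import datetime

              LOG_FILE = "mission_tests.csv"  # invented file name

              def log_run(mission, variation, passed, notes=""):
                  """Record one run: what was varied and whether it worked."""
                  with open(LOG_FILE, "a", newline="") as f:
                      csv.writer(f).writerow([
                          datetime.now().isoformat(timespec="seconds"),
                          mission, variation, "pass" if passed else "fail", notes,
                      ])

              # Example entries from a starting-position sweep:
              log_run("space travel", "start 1/2 inch North", True)
              log_run("space travel", "start 1 inch North", False, "missed the ramp")
              ```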

              You will never be able to make missions that work all the time on every table, but you can make missions that adapt to reasonable amounts of variation. The better your robot adapts, the more likely it will succeed. Autonomous robots do not succeed by doing the same thing over and over. They have to adapt to changes in the environment. FLL robots are supposed to be autonomous.
              Last edited by Dean Hystad; 12-27-2018, 04:09 PM.



              • #8
                True. But we have easy access to one table and we have one set of operators. So we do the best we can with that. Occasionally we have other drivers step in, but normally the team selects the drivers and we stick with that. Does that set up a single point of failure? Of course. So far we have been lucky. Some day it will bite us.

                When we get access to another table, there are always missions that suddenly don't work. Fortunately, normally it's a small tweak here or there and the mission seems reliable again. Often it identifies a new failure mode that we just hadn't seen before, and the mission becomes even more robust. Sometimes it requires some building, but not very often at all.

                And then it's tournament day. Get to the practice table and bam! All of our missions work! Or none of them! We get it fixed or we rest on our laurels. Then we get to our practice round and then it is something else. And then to our first official round which is on yet another table and by now we are getting closer to having all of the missions working again. Or not. Oh, this table has tape on the wall here. Or a nail head in a critical spot.

                It's all a part of the game. You test, make it as reliable as you can. Try to think of failure modes that haven't bitten you yet, and overlook some others. You find a new way to test. New table perhaps? Put a book under one leg of the table? Move the mat slightly north/east/west? Different lighting? You do what you can, and have fun! It's all a part of the effort.

                The 10/10 test is only one part of our quality control. It's better than a 1/1 test for sure! I don't think of the 10/10 test as practice, but I know it is. The human drivers are a part of the reliability. Every season I have kids who don't want to put hard stops on their attachments. They think they can manually set an attachment exactly where it needs to be each time. They quickly find out that, quite often, they can't. The 10/10 test will usually make that apparent.
                Norfolk, Virginia, USA
                FLL Coach and Regional Tournament Head judge since 2014



                • #9
                  When you run your 10/10 test you are testing for variability. If all conditions are the same every time, the robot will run a program exactly the same way every time and it will succeed every time. Of course, all conditions are never the same run to run, so every run has some variability. If you are able to run the mission 10 times and it works 10 times, that means you have not exceeded your "variability envelope". Your mission was able to adapt to changes in starting conditions and the environment and still work. The problem with this type of testing is that you end up with no understanding of what your "variability envelope" is. How was this run different than the one before, or from runs on another day? Were your operators really on top of their game when they got 10/10, or were they tired or jumpy? Was it a sunny day or a cloudy day? Was it warm or cool? Was the table set up properly? It is a terrible thing to optimize a mission to the wrong conditions. There is always variation in every run, but if you try to run missions the same way every time, you leave the variation up to luck. Why do that when you can control most of the variation and see how the robot responds?

                  Other than leaving what you are testing up to chance, 10/10 testing is bad because it is really inefficient. I see teams run their mission 10 times (some use 5 times) and record how often it failed. They make a small mod to the mission and run 10 times to test. It may take 15 minutes to run the robot 10 times and about 2 minutes to make the small mod. You just spent 17 minutes on one small problem, and chances are you guessed wrong on what the problem was. When you test using controlled variation, you know what the problems are: the mission has to start with the robot in the perfect position or at the perfect angle, or the A attachment has to be exactly 1/4 inch above the table, no higher or lower.

                  Finally, the biggest reason why 10/10 testing is bad is that your missions never really become robust. The more you run the robot, the better you get at controlling the variables. You get better at positioning the robot in base. You get better at setting up the attachments. You get better at using the same method for starting the mission. You (the operator) get better, and this lets your robot get worse. Having highly trained operators sounds like a great thing, but their skills may only work on their practice table. When judging, I'll sometimes ask who does the worst job running the robot (not in those words) and then explain to the team why that person is extremely important and a big reason why their robot is working so well. If you aren't going to use controlled variation, at least test missions with different operators. My girls often had me run the missions. When Dad could run the robot, you knew it was working really well. (Dad is an unbelievably bad operator. Whether this is intentional or not is yet to be determined.)
                  Last edited by Dean Hystad; 12-27-2018, 12:47 PM.



                  • #10
                    Originally posted by SkipMorrow View Post
                    Just running a mission some number of times is not useful, as Dean has expounded on at length. Running missions and analyzing what goes wrong is what will improve the probability that it will work over a variety of conditions. One can also extrapolate to estimate what variations one should accommodate. It is also useful to introduce some variations to simulate what can be seen in the real world. Use the success rate to evaluate the effectiveness of the changes to the solution.

                    Aim to design solutions that are tolerant of variations but don't pass up opportunities to minimize the variations that can occur. It takes some judgement and experience to decide what variations are reasonable and what variations are unlikely. If the solution can mitigate a sufficient amount of variability, the solution can work on multiple tables with a high probability of success.



                    • #11
                      Originally posted by Dean Hystad View Post
                      Have you ever watched NASCAR or F1 pit stops?



                      • #12
                        Originally posted by philso View Post

                        Have you ever watched NASCAR or F1 pit stops?
                        Enough to know the pit crew doesn't pick up the car and carefully aim it down the track.

                        I have yet to get a pit pass, but some of my co-workers have. I was in Maranello, Italy, at the Ferrari wind tunnel during a Formula 1 race. They had a bunch of screens showing what was happening on the track and graphs showing some of the data being collected by the car. It was impressive even though I understood very little of it. When I was younger I used to pit for my uncle's dirt track modified stock car. We didn't have screens and data streaming from the car (or a wind tunnel).
                        Last edited by Dean Hystad; 12-27-2018, 02:59 PM.



                        • #13
                          Originally posted by philso View Post

                          Every time I am with a group of coaches I hear advice about using jigs. In general I don't like jigs much because they get used the wrong way. A jig should never be a crutch that you depend on to make a mission work, because when jigs are used that way they don't work. Often the jig reduces the mission variables just enough that the mission works pretty reliably at home, which can hide the fact that it is not very robust. Because the mission worked all the time at home under carefully controlled conditions, you go to the tournament with great confidence, until the first run where that super reliable mission doesn't work any more.

                          The time to use a jig is after you have a reliable mission and you want to make it more reliable by starting the robot in the middle of your "variation envelope", or because you want to save time positioning your robot in base. Reserving jig use for missions that don't need a jig sounds goofy, but it is the kind of goofy thinking that leads to really good solutions.

                          Occasionally coaches hunt me down (like a dog) at tournaments to talk about how their team is doing and what they can do to be better. A significant number of these conversations start out with "When I first started reading your posts they made me really angry." This would make me sad except it is usually followed by "Once we began to understand about reliability and adaptability and what we could do to limit ways the robot can fail the kids started enjoying programming a lot more and changed how they designed missions and we are having a lot more fun than before." As long as I keep hearing "having a lot more fun than before" you can count on hearing my goofy opinions.
                          Last edited by Dean Hystad; 12-27-2018, 03:30 PM.



                          • #14
                            Originally posted by Dean Hystad View Post
                            I don't see how that's remotely helpful for julnil. She clearly said: "The team participated in their first FLL this year, and was really unprepared, since the teachers did not know much about this either." She's starting from scratch. How is she supposed to "simulate conditions that may occur during a match"? So then you go on to say that "the most likely variation is that the robot is not always positioned the same way for the start of the mission." One obvious solution to that is some sort of jig, but later on you say "in general I don't like jigs because they get used the wrong way." Then you go on to suggest "the time to use a jig is after you have a reliable mission." But at this point, you've completely changed the subject from where julnil started, and you haven't provided anything at all that's helpful for her.

                            So, regardless of whether your posts make people "really angry" or whatever, what do you suggest she can actually do as a mom who wants to help her team for next year?



                            • #15
                              Originally posted by brian@kidbrothers.net View Post

                              First off, julnil is no longer a rookie, so this is not "starting from scratch". There is already one season of experience to show that what the team was doing didn't work very well. I'm guessing there was a lot of aiming and hand positioning of attachments and missions that rarely succeeded. I bet there was a lot of running the same program over and over and making little changes to where the robot was positioned in base. I'm also guessing this was done in a haphazard manner with no plan, no process, and nobody recording any results. In other words the way most rookie FLL teams do things their first season. That is what rookie seasons are for. You hopefully learn that just driving out and poking things doesn't work. You see some teams that have really impressive robots, and you see other teams that have simple robots but still score a lot of points. You see teams doing strange things like backing into walls or stopping over the lines on the mat. You see that a lot of teams are using sensors. You learn a lot during your rookie season.

                              After your rookie season you know enough to ask questions. When you ask questions you get a lot of advice. You are still mostly a rookie, so sifting through the advice is as difficult as designing missions. I think a lot of FLL advice is bad. I don't think it is intentionally bad, or that the people who pass it on are bad. I think a lot of the bad advice even works to a degree. Using a jig is a lot better than aiming by hand. Using matched motors is better than using motors with different characteristics (who has enough motors to do this?). Using a checklist will give better results than no documentation on how to run missions. All of these are good things, but whether you are a rookie coach or an experienced one, you might use them for the wrong reason.

                              A jig can make a mission that fails most of the time start working half the time or even most of the time. A coach will look at that and say that jigs are great. If we make a better jig maybe the mission will work all the time. It is more likely that the mission has some flaws that are hidden by the jig. Fixing the flaws may eliminate the need for the jig and will certainly result in missions that are more reliable. So jigs aren't bad, but they can have bad side effects. You may never learn how to make missions that are really reliable because you were able to limp along using a jig.

                              So Skip offers up a bit of advice about testing. Teams should test their missions, no refuting that. The problem with the advice is you don't learn anything from a mission that works.
                              Running the same mission starting from the same place over and over only tells you that the mission works when there aren't any changes or problems. Anyone who's done FLL knows that this is not how things work. Things change when you go from one table to another. Things change day to day. Some things change minute by minute. If everything could be controlled to always be the same, there would be no reason for testing. We don't have that kind of control, so our testing has to introduce changes and see how the robot responds. A good mission test will have a plan: "We are going to adjust the starting position east and west until the mission starts to fail all the time. We will carefully control the starting position so we know how far it is from the ideal. We will record each time it fails and where it fails." After you run the test you can analyze the data: "The mission was pretty reliable until we moved the starting position 1" to the East or 1/2" to the West. The mission failed because it bumped into the space station." The analysis should lead to some conclusion: "Driving near the space station is the weakest part of this mission. Can we use another route? Can we use the line by the Escape Velocity model?" You would modify the mission based on the conclusion and test again. After the test you may decide that the mission works well enough: "The mission works pretty well if the robot position is off less than 1/2". If we move the starting position a little bit to the East I think it will work almost every time, especially if we work that position into our starting jig."
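
                              Turning those recorded runs into a conclusion is just as mechanical. A few lines of Python can tally the pass rate at each offset and show where the "variability envelope" ends (the numbers below are invented for illustration):

                              ```python
                              # Pass rate per starting-position offset (invented data).
                              from collections import defaultdict

                              runs = [  # (inches East of nominal start, succeeded?)
                                  (-0.5, True), (-0.5, True), (0.0, True), (0.0, True),
                                  (0.5, True), (0.5, True), (1.0, True), (1.0, False),
                                  (1.5, False), (1.5, False),
                              ]

                              tally = defaultdict(lambda: [0, 0])  # offset -> [passes, total]
                              for offset, passed in runs:
                                  tally[offset][0] += passed
                                  tally[offset][1] += 1

                              for offset in sorted(tally):
                                  passes, total = tally[offset]
                                  print(f'{offset:+.1f} in: {passes}/{total} passed')
                              # Where the pass rate collapses is the edge of the envelope.
                              ```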

                              I like doing both development and testing at the same time, and I start out by the model. Why write a program to drive from base out to a model if the attachment doesn't work when you start only 6" away? If I wanted to solve the extraction mission I would design an attachment and write a little program that starts right next to the extraction model, moves the attachment and pulls the samples off the axle. If this worked a couple times I might move the starting spot (by the model) a little bit North or East and see how far the robot can be off and have the mission still work. If the "variability envelope" is small I might redesign the attachment so it works over a larger area or I might think about having the robot bump into the model or use a line by the model. After I get things working with the robot starting out by the model I would work my way back toward base. For something close like extraction my next step might be starting from base. For catching the lander I might pick some via points that are easy to get to and identify. Bumping into the aerobic exercise model while driving against the North wall is a good way to know where you are. Or maybe I would have better luck seeing the moon while driving North along the East wall. But the process is pretty much the same at each via point until I get all the way back to base and have a nice reliable mission that is insensitive to all kinds of little bumps and misalignments.
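
                              In code terms (sketched once more in ev3dev2 Python, with every port, speed, angle, and distance a placeholder), working backward from the model naturally gives one small function per step, each testable on its own before they are chained:

                              ```python
                              # Sketch of "start at the model, work back toward base".
                              # Ports, speeds, angles, and distances are placeholders.
                              from ev3dev2.motor import (MediumMotor, MoveTank,
                                                         OUTPUT_A, OUTPUT_B, OUTPUT_C)

                              tank = MoveTank(OUTPUT_B, OUTPUT_C)
                              arm = MediumMotor(OUTPUT_A)

                              def extract_samples():
                                  """Step 1: tested alone, robot placed right at the model."""
                                  arm.on_for_degrees(speed=30, degrees=90)   # lower attachment
                                  tank.on_for_rotations(-15, -15, 0.5)       # pull samples off
                                  arm.on_for_degrees(speed=30, degrees=-90)  # raise attachment

                              def drive_out_to_model():
                                  """Step 2: added once step 1 tolerates sloppy starts."""
                                  tank.on_for_rotations(40, 40, 3.0)         # placeholder route

                              # Chain the steps back to front as each one proves itself:
                              drive_out_to_model()
                              extract_samples()
                              tank.on_for_rotations(-50, -50, 3.5)           # retreat to base
                              ```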

                              Now julnil has heard from a design judge with 19 years of experience, who has coached or mentored dozens of teams, that running the same mission over and over without changing anything teaches you very little, and that it is better to make the robot adapt to variation than it is to try to control variation. I even tossed in my philosophy of starting to develop missions out by the mission model and working backward toward base. I've seen teams in their 4th season of FLL that don't know those lessons. I think that kind of thing is useful.
                              Last edited by Dean Hystad; 12-27-2018, 07:02 PM.

