What high scores & score distribution do you expect to see at Qualifiers?


  • #61
    Originally posted by Dean Hystad View Post
    This thread is supposed to be about expected scores at qualifiers. Actual scores should be in a different thread. State tournament scores in another thread. Could we please stick to the topic?
    Can I publish our scrimmage scores here? LOL



    • #62
      Originally posted by Dean Hystad View Post
      You can try to minimize the effects by carefully checking the table each time, or you can design your solution to be as tolerant of variation as possible.
      I think the issue is that you're being awfully dismissive about the "variation" of the mat not being correctly positioned. You're both saying that it's "an absolute" that it should be positioned correctly while also saying the kids should have somehow anticipated that it wouldn't be positioned correctly and should have programmed accordingly.

      However, if the mat used in a tournament were printed incorrectly so that the black bars weren't really black and the white bars weren't really white and a color sensor became worthless, I don't know that you would be as cavalier about that "variation" and say that the kids should have anticipated that the colors wouldn't be correct.



      • #63
        At our qualifying tournament here in Michigan, one team got 172, the next got 113, third was 108, fourth was 100, and the other 44 teams were below 100.



        • #64
          Originally posted by brian@kidbrothers.net View Post

          I think the issue is that you're being awfully dismissive about the "variation" of the mat not being correctly positioned. You're both saying that it's "an absolute" that it should be positioned correctly while also saying the kids should have somehow anticipated that it wouldn't be positioned correctly and should have programmed accordingly.

          However, if the mat used in a tournament were printed incorrectly so that the black bars weren't really black and the white bars weren't really white and a color sensor became worthless, I don't know that you would be as cavalier about that "variation" and say that the kids should have anticipated that the colors wouldn't be correct.
          It is highly unlikely that a batch of inverted-color mats will be released into the wild, but it is a certainty that you will not be competing on your practice table. The table at the tournament will be different. It may be a slightly different size. The walls may be a different height. The mat might be a slightly different size. The mat colors may be slightly different. The mat may have ripples or creases, or it might be flatter than your mat. The walls may be smoother or rougher. The table may be flatter or bumpier. Not only will the table be different, but the environment will be different. There will be a lot of nerves at the tournament. There may be a lot of noise that could affect communication. Lighting conditions may be different, making it hard to read the display and potentially changing light sensor readings. Running the robot at the tournament, when it really counts, is going to be a lot different from a practice run in your meeting room.

          Teams cannot control all of these changes. You can ask for the field to be set up correctly and for the mat to be correctly positioned, but that is about it, and it covers little of the variation you will see. You can hope that will be enough, but I think it is better if your team deliberately introduces variation while working on their missions and uses what they learn to make their solutions more robust. If you always practice your missions under exactly the same conditions, how will you know what will happen when those conditions change? If you change the conditions and the mission fails, at least you learned something. Maybe your team will see how the mission failed and figure out a small modification to prevent the failure. Push the mat over into the NE corner and see what happens. Should the mat be in the NE corner? No. But running with the mat in the NE corner can simulate a wider gap along the West wall, and that can happen. You might also find a way to make the missions insensitive to a gap along the South wall. I know that gap isn't supposed to be there, but now you're ready just in case it is.
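          One cheap way to soak up exactly this kind of positional variation is to square against a wall before a mission. Here is a minimal sketch, assuming an EV3 running ev3dev with the ev3dev2 Python library and drive motors on ports B and C; the ports, speeds, and timings are illustrative guesses, not values from this thread:

```python
#!/usr/bin/env python3
# Wall-squaring sketch (assumed setup: EV3 brick running ev3dev,
# drive motors on ports B and C; speeds and timings are guesses).
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C

tank = MoveTank(OUTPUT_B, OUTPUT_C)

# Back up gently for longer than the drive strictly needs. The wall
# stops the robot and squares its rear against it, wiping out small
# errors in how the robot was placed or how the mat sits.
tank.on_for_seconds(left_speed=-15, right_speed=-15, seconds=2)

# The robot now starts its run from a repeatable position and
# heading, regardless of a small gap or shift in the mat.
tank.on_for_rotations(left_speed=30, right_speed=30, rotations=2)
```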

          More importantly, introducing variation in the mat position introduces the concept of variation itself. Like I said, there are going to be a lot of differences between your table and the table at the tournament, and there is nothing that can be done about most of them. If you make a little change to the field and missions start failing, you might spend some time investigating why the missions are so dependent on field conditions. You might learn that odometry (depending on accurate driving) is not a good way to navigate, and that might lead the team to different types of solutions that are more reliable. That in turn will lead to using more sensors and more of the programming language. When you go to your tournament, your robot will score the same points it does at home, and your team won't be frantically tweaking their missions. They will discuss all they learned in design judging and get a good evaluation. They will use all their robot-tweaking time to visit with other teams.
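          To make the odometry-versus-sensors contrast concrete, here is a second minimal sketch under the same assumed ev3dev2 setup, now with a downward-facing color sensor; the threshold is a made-up number you would tune on your own mat:

```python
#!/usr/bin/env python3
# Sensor-based navigation sketch (assumed setup as above, plus a
# downward-facing color sensor; the threshold value is invented).
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C
from ev3dev2.sensor.lego import ColorSensor

tank = MoveTank(OUTPUT_B, OUTPUT_C)
cs = ColorSensor()

BLACK_THRESHOLD = 15  # reflected-light %, tune per table and lighting

# Odometry style: drive a fixed distance and hope the mat is exactly
# where it was at home.
#   tank.on_for_rotations(left_speed=30, right_speed=30, rotations=3.5)

# Sensor style: drive until the robot actually sees the black line.
# The stopping point now tracks the mat itself, so a mat that has
# shifted an inch changes nothing.
tank.on(left_speed=30, right_speed=30)
while cs.reflected_light_intensity > BLACK_THRESHOLD:
    pass
tank.off()
```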

          I've seen way too many teams not having much fun at FLL tournaments. I don't like it when teams don't have fun.
          Last edited by Dean Hystad; 12-07-2018, 12:19 PM.



          • #65
            Originally posted by Dean Hystad View Post

            I am not being cavalier at all. If anything, those who think the table will always be right are being defeatist. You are saying your team's scores are beyond their control.
            Not necessarily what is being said....

            Originally posted by Dean Hystad View Post
            You are saying nothing can be done to compensate for any differences between the table at the tournament and the table at home. You are saying all your team can do is ask the ref to fix some setup problems and hope for the best. I think that is bad thinking.
            Not necessarily what is being said.

            Originally posted by Dean Hystad View Post
            Teams work hard on their solution. This is true for solutions that are robust or solutions that are likely to fail. I think teams are much happier when they spent their time working on solutions that are likely to succeed at the tournament. Designing to fail is no fun.
            Eh, but the score shouldn't be that important (though I know it is hard to dissuade the kids), and if they are getting upset or not having fun over the score, outside of an unexpected catastrophic failure (somebody dropped a robot, or a connection came loose), something is probably wrong.

            Also, small failures can teach lessons. If you're providing guidance and the guidance is ignored, that lesson will get learned all the same, but it may not come with as much unexpected sadness....

            Originally posted by Dean Hystad View Post
            Not only will the table be different, but the environment will be different. There will be a lot of nerves at the tournament. There may be a lot of noise that could affect communication. Lighting conditions may be different making it hard to read the display and potentially changing light sensor readings. Running the robot at the tournament when it really counts is going to be a lot different than a practice run in your meeting room.
            So true.

            Originally posted by Dean Hystad View Post
            More importantly, introducing variation in the mat position introduces the concept of variation itself.

            I've seen way too many teams not having much fun at FLL tournaments. I don't like it when teams don't have fun.
            They should be able to have fun without the robot doing exactly what they expect, and without big scores. Given that 98% of teams will not compensate for most of the variation they invariably run into, I should think the kids should take home lessons about *WHY* their robots behaved the way they did during given runs.

            That's the element of gracious professionalism and scientific inquiry that we are hopefully instilling. If everything goes off without a hitch, that's great, but there is no such thing as a perfect run.

            My .02, I hope that comes through ok.

            Brian, I'm in the Detroit area; I see you were at a SE MI qualifier. Feel free to drop a line if you're doing this again in the future.
            Last edited by altCognito; 12-07-2018, 11:35 AM.



            • #66
              We had our qualifier today. They run three tournaments in one location, each with 16 teams. We had a rough start, but my kids pulled together and had a good run on their last time at the table, and moved to the top of the scoreboard. Scores were 246, 202, 185, 178, 136, 133, 96 ... The other two tournaments were similarly distributed: one was 234, 134, 115, 98 ... and the other was 186, 177, 164, 120, 113, 109, 76 ...



              • #67
                Top scores at a 32-team qualifier in Minnesota yesterday were 236 and 132. A few other teams broke the century mark, but everyone else was between 0 and 100. As a judge, I saw a slightly greater variety of missions at this tournament than at the one I attended last month.



                • #68
                  Today I saw more teams getting the scores they expect (as opposed to last week, where many teams were getting half their expected points). Still nothing in the 300s, but 150+ is not uncommon and a few 200+ scores are sneaking into the mix. There are also a lot of scores below 50, but those teams were expecting low scores.

                  I did see one really cool thing that made me smack my forehead: inline "floating style" comments! They really stand out, don't make the program much longer, and follow along when blocks are added or removed. The team didn't even realize how great their idea was. I gave them an innovative design award.

                  [Attachment: Capture.JPG (screenshot of the inline comments)]
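                  For anyone reading without the screenshot: the text-language analogue of those floating comments is putting each comment on the step it describes, so comments move with the code as blocks are inserted or deleted. A small illustrative sketch (the mission steps and numbers are invented):

```python
# Text-code analogue of the "floating" EV3-G comments in the
# screenshot: each comment rides on the step it describes, so adding
# or removing a step carries its comment along. Steps are invented.
def run_mission(tank):
    tank.on_for_seconds(-15, -15, 2)      # square up on the south wall
    tank.on_for_rotations(30, 30, 3.0)    # drive out to the black line
    tank.on_for_rotations(-30, 30, 0.25)  # pivot left toward the model
    tank.on_for_rotations(20, 20, 0.5)    # nudge the lever down
    tank.on_for_rotations(-40, -40, 3.5)  # retreat to base
```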
                  Last edited by Dean Hystad; 12-10-2018, 01:53 PM.



                  • #69
                    The top two scores at our regional in Iowa this weekend were 266 and 111. The rest of the top scores were under 100.



                    • #70
                      Originally posted by Dean Hystad View Post
                      Today I saw more teams getting the scores they expect. Still nothing in the 300s, but 150+ is not uncommon and a few 250+ scores are sneaking into the mix. There are also a lot of scores below 50, but those teams were expecting low scores.

                      I did see one really cool thing today that made me smack my forehead: inline "floating style" comments! They really stand out, don't make the program much longer, and follow along when blocks are added or removed. The team didn't even realize how great their idea was. I gave them an innovative design award.

                      [Attachment: Capture.JPG]
                      OK, I'm totally stealing that for the post-season 'What can we do better next year' rundown.
                      Coach, FLL Team 3146 Peace By Piece 2013 - 2016; Team 29410 The Dragon Bots 2016-2018
                      Judge, FTC 2014-2015; Field Technical Advisor, FTC 2016-2018; Robot Inspector, FRC 2018



                      • #71
                        Originally posted by jasponti View Post
                        The top two scores at our regional in Iowa this weekend were 266 and 111. The rest of the top scores were under 100.
                        That is a huge gap. How many teams were there?



                        • #72
                          Michigan had one of its two state competitions. Out of 56 teams, the top scores were 228/226/204/204, plus a slew of scores in the 100s (15? I honestly can't remember; there seemed like a lot of them down there).



                          • #73
                            Tim, there were 24 teams, and it looked like there were quite a few young and first-year teams. The team that scored 266 was in their third year. In past years, the typical high score at this regional has been 120 to 140.



                            • #74
                              Originally posted by jasponti View Post
                              Tim, there were 24 teams, and it looked like there were quite a few young and first-year teams. The team that scored 266 was in their third year. In past years, the typical high score at this regional has been 120 to 140.
                              120 to 140 from previous years is probably about 80 to 100 for this challenge. Most recent challenges had more points available and more easy points near the base.



                              • #75
                                Originally posted by altCognito View Post
                                Michigan had one of its two state competitions. Out of 56 teams, the top scores were 228/226/204/204, plus a slew of scores in the 100s (15? I honestly can't remember; there seemed like a lot of them down there).
                                Do you happen to know what robot score the team that advanced to the World Festival received? Thanks.

