Course Description

Science Research Program – Grades 9, 10, 11, 12

Full Year – Level H: Incoming freshman and sophomore students must apply for admission to this program in January of the prior academic year. Acceptance into the program will be based on a science teacher recommendation, a written essay, and excellent academic grades.

This is an ongoing program taken in conjunction with the student’s regular science course. There are several tiers to the program.

During Year 1, students learn the components of scientific research, including the scientific method, and apply these concepts in various settings, including designing and conducting an authentic science research project and communicating results by participating in at least one local science fair. Students also explore various applications of science topics through field trips, guest speakers, and class projects.

Advanced students (Years 2–4) select their science research topic, locate an out-of-school mentor (either in industry or at a local university), and compete in a variety of science fairs, including the CT State Science Fair, the Southern CT Invitational Science and Engineering Fair (SCSEF), and the CT Junior Science and Humanities Symposium (CT JSHS). Advanced students pursue their selected research in depth, perform statistical analysis, and compete at a number of local and/or national science fairs and competitions. In Years 2, 3, and 4, students are grouped together in a non-traditional classroom setting and are required to meet individually with their Science Research Instructor biweekly, outside of class, to review individual goals and assess progress.

All students participate in the culminating annual activity, Amity’s Science Symposium.

Wednesday, April 9, 2014

Transcendent Man Video - Mr. Lazzaro's Class

Please answer the following questions in a brief essay.

Is the singularity inevitable?  If so, when will it occur?  If not, why not?

Will the development of AI and related technologies ultimately be positive or negative?

Should we proceed with the development of AI, brain implants, and related technology?  Is it simply part of the evolutionary process?


  1. Kathleen Walsh
    April 10, 2014
    I believe the singularity is possible. The singularity is when technology will surpass human intelligence, and I believe it will happen because we already have the precursors for the technology. For example, scientists have already developed implants that can repair muscle and bone; the body adapts to the technology, growing tissue around it and holding it in place. This effect could be used in the future with technology in our brains. Kurzweil believes this event will happen in the year 2045, and I believe that is possible because of the extreme exponential growth technology follows, and also because humans are “obsessed” with the newest technology. For example, when the iPhone came out, everyone wanted one; the same rule will apply to the singularity.
    I believe there are both positive and negative effects of artificial intelligence. The positive effects will be in production and health: people will be able to live longer and keep their bodies healthy, and factories will be able to increase their productivity, driving economic growth. However, there are many negative effects too. If artificial intelligence surpasses human or natural intelligence, humans will not be able to control this new power. That loss of control could lead to problems far beyond those we anticipate.
    I believe we should stop the development of artificial intelligence because evolution is a natural process. Evolution occurs through changes encoded in our DNA; by altering the DNA of our bodies and creating an artificial species, we make the process unnatural. We have been able to live for over 300 years without artificially increased intelligence, so why can’t we continue the natural lifestyle?

  2. “Transcendent Man”
    By James He

    Considering how advanced technology has become in the past few years, the singularity appears to be inevitable. Already, everyday machines like iPhones and computers are remarkably smart in how fast they can perform calculations and other tasks. Furthermore, humans have changed to the point where a huge number of people around the world have become dependent on these electronic devices. Thus, the idea of technology surpassing human intelligence has already been embraced by many and heavily researched by renowned scientists like Ray Kurzweil. Because technology grows exponentially, these researchers will likely discover a way to increase artificial intelligence to the point where human intelligence becomes, in a sense, outdated. Judging from the way people are acting, at least in America, the singularity will happen by 2045 at the very latest.
    The introduction of new innovations to our world always comes with a set of positives and negatives attached. With technology, I believe our world has changed in such a way that the positives will outweigh the negatives only until it overtakes human life completely. The advances in technology we’re working on now are rather extreme in the purpose they wish to fulfill. It seems too risky for artificial intelligence to fully outpace human intelligence and still be safe to control and use to its fullest capability. Furthermore, the reliance on technology and the decreasing emphasis on school education are already lowering human intelligence, making it easier for technology to overtake us. Also, the idea that we can plug a computer chip into our bodies and control a computer with the movements we make is hard to fully comprehend. Ultimately, robots gaining minds of their own doesn’t seem so far-fetched now that we’re thinking about the singularity. Therefore, I believe the exponential development of AI will end poorly, in a war of man versus machine.
    As of right now, research on the development of new technologies and AI should be encouraged but kept out of the media spotlight. This growth in technology can’t be an evolutionary process, since evolution is natural for humans: it means adapting to the environment around us in a natural manner, a characteristic of all living things. Technology is simply an artificial tool we have developed to aid in daily and complicated tasks, so it can’t be categorized as part of evolution. Based on that definition of technology, I don’t think any technology that would replace human interaction or human action should ever be introduced to the entire world. Returning to my earlier statement about continuing research on new technologies, I believe the only advances we should pursue are those that help us in everyday life and make human life easier. For example, self-parking cars would be a good idea, as they would reduce car accidents and save humans time and effort. Ultimately, technology is bound to grow, but how we choose to factor it into our lives is our choice, and I believe it should be kept out of our lives if our purpose is to surpass human intelligence.

  3. Neha Pashankar

    The singularity is not inevitable. The singularity is the moment when artificial intelligence surpasses human intelligence. It’s not possible, because artificial intelligence could never think creatively the way humans can. According to Merriam-Webster, intelligence is defined as the ability to learn or understand things or to deal with new or difficult situations. AI could not deal with new or difficult situations like riddles, puzzles, or creative thinking. It also wouldn’t be able to have feelings, because it could never have a natural conscience. I think that your conscience and “gut” feeling are how many people live and experience things today. People learn subconsciously through every single experience they go through, and robots wouldn’t be able to learn from half as many experiences as humans can. Also, in subjects like English, where language is passionate and “real,” robots would not be able to surpass humans. They would not be able to write as ardently and passionately as humans can. English can be a measure of intelligence, and if robots can’t write the way humans do, they won’t exceed us.
    I do think that AI will surpass people in more straightforward tasks like arithmetic, science-related problems, remembering information, exercise, and many more. But all of these things measure intelligence in a limited way; there’s so much more to intelligence, and computers wouldn’t be able to emulate humans.
    The development of AI and related technologies will ultimately be negative. Considering both sides, I do think there are countless benefits, but overall I don’t think they will be positive. I believe it would go against evolution and natural selection. Natural selection is the process by which plants and animals that can adapt to changes in their environment survive and reproduce, while those that cannot adapt do not survive. If everyone survives, because anyone with an unfavorable adaptation can be changed, then the world would suffer an even larger population problem than it does today. I also believe it will cause numerous controversies between religion and science, even more than today. These disagreements could lead to wars or other major conflicts.
    I don’t think that we should proceed with the development of AI, brain implants, and related technology. Instead of working on unnecessary technologies that people survived without 100 years ago, scientists and engineers should work on the world’s problems today: making more vaccines, making those vaccines available everywhere, finding a cure for cancer, addressing the ever-growing population, and many more. Homo sapiens have been on this earth for 200,000 years, and they lived without all of this. It is not simply part of the evolutionary process, because humans have been taught that they are the end of evolution. Also, evolution is a natural process, with the exception of artificial selection as used for animals, and making AI simply isn’t natural.

  4. Nicholas Yoo
    Mr. Lazzaro
    Science Research Period 1
    10 April 2014
    Science Research Blog on Technology
    The singularity is inevitable unless governments or some other power stop all technological advances. This is because all modern technology, such as computers, has been growing exponentially. The singularity will probably happen by 2062, because there are many precursors to such technology. AI today is already very smart. For example, in video games the enemy AI can be very intelligent and outsmart you, and playing chess against a computer on a high level is very difficult. If a computer can outsmart someone in a game, and technological advances are growing exponentially, then it is safe to think that AI can evolve to outsmart people in other activities.
    The development of smarter AI can be both positive and negative. It can be used positively by installing it in a robot that does chores around the house like cleaning or cooking; it can fight wars without risking soldiers’ lives; and it can provide ease of access, like saying to your house, “Door, open,” and having the house recognize your voice and open the door for you. The negative effects could be the AI taking over all computers, as in many sci-fi movies, and everything becoming an automated process with little human interaction left.
    I believe that we should continue improving AI and other related technologies, but only up to a point. AI taking over the world may seem outlandish and unlikely, but who would want to take the chance? If robots with smart AI replaced police, it would be awful if the police robots suddenly turned on humans and started killing us all. I do not believe this new technology is part of the evolutionary process, because evolution is normally something that happens inside the living being, not the living being developing something to make itself better. A person who takes steroids to enhance himself has not evolved.

  5. Jasmine Moon
    April 10, 2014

    The singularity is when technology surpasses human intelligence. It is inevitable because technology is constantly becoming more advanced. Today, we have iPhones, which are essentially computers that fit in our pockets. Bionic body parts have also been invented, so if people have these implanted into their bodies, then technology will be living inside us. We are getting closer to the singularity step by step, and I think it will occur in the near future. Ray Kurzweil predicted that it would happen in the year 2045. The only way the singularity can be avoided is if a law is put in place or enough people strive to stop it. But this probably won’t happen, because we have already made bionic body parts, and those are accepted because they help people. Making a complete robot human may not be that different.
    The development of technologies related to artificial intelligence will ultimately be negative. This is because there will be numerous arguments about it, such as whether it is moral or what would happen if we could no longer control the artificial intelligence. Also, Kurzweil said that he was trying to find a way for people to live forever along with making the singularity possible. Well, at what age would people live forever? What if they ultimately want to rest in peace?
    I think the development of AI will continue, but I personally do not think it should. Although some bionic body parts, like bionic eyes and bionic ears, help people, especially the blind and deaf, I think it’s just not right. Humans like us may no longer be able to live on the planet and may be replaced by robots. I do not think the singularity will be part of the evolutionary process, because evolution occurred through nature. Robotic humans simply are not natural.

  6. Marissa Della-Giustina

    April 10, 2014

    The singularity is likely inevitable because technology is constantly changing and becoming more advanced. The technology humans invent keeps increasing in complexity and intelligence. For example, a new robotic hand, similar to a human hand, was shown in the video. It moves whenever a human hand moves in front of it, so it is a very highly advanced machine. If humans keep inventing these new robotic technologies and machines, then the singularity will occur. These technologies are precursors to the singularity, as technological intelligence comes closer to surpassing human intelligence. As of now, with the new robotic hands and brain implants, robots are coming closer to parity with humans; once they reach that point, they will begin to surpass us. However, I don’t think it will happen as soon as Ray Kurzweil predicts, in 2045 to be exact. I think technology and robots will not surpass humans that quickly, because it has already taken so long to get where we are today. It may happen not too long after that, just not that soon.
    The development of AI and related technologies will ultimately be positive. These new technologies could help humans with many things. People who lose or are born without limbs would be able to get an artificial arm, someone who is blind would be able to have artificial eyes, and the list goes on. It should be beneficial to us overall, unless the new technologies try to take over the human race, but the current outlook is more positive than negative. If they are available to all humans, not just governments and inventors, and are monitored, the development of AI and related technologies will help the human race, especially those with disabilities. It will also help scientists learn more about AI and related technologies, increasing our knowledge of robotics.
    The development of AI, brain implants, and related technologies should proceed because it is, in a sense, part of the evolutionary process. Part of evolution is survival of the fittest: those most fit for their environment survive. So by creating these new developments, humans are trying to become better adapted to their environment and thus more fit to survive. It should also proceed because it will benefit humans with disabilities and be ultimately positive for the human race. Creating these new technologies will increase the number of people fit to survive in their environment.

  7. Kate Alvarado

    The singularity is defined as the “hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.” That being said, I do believe that the singularity is inevitable. Over the course of history, it has been in our human nature to continually try to outdo ourselves. We have been trained to keep pushing on, and I think the singularity will be no different. Technology has grown exponentially over the past hundred years, and therefore the singularity is closer than we think. We also have many precursors to the singularity. For example, we already have smartphones and tablets that can do virtually anything, far more than the computers of, say, fifty years ago. If we compare the technology of today with the technology of fifty years ago, we can see how far we’ve come and the strides we have already made toward the singularity. With all of this, I agree with Ray Kurzweil that the singularity will happen by 2045.

    I think the development of AI and related technologies will ultimately be more positive than negative. The singularity will be beneficial to us and make our way of life even simpler than it already is. We could have so much information at our fingertips, and it will ultimately help us more than it hurts us. In contrast, I do not think AI will surpass humans in things like math or problem solving; I think people will remain superior in these areas because of the way our brains are wired and able to problem solve. That said, I do believe there is a downside to the singularity. It would essentially allow humans to live forever, and I don’t believe that is a good thing in all cases. Overpopulation would become a huge problem, and death is just a part of life. It’s unnatural not to die!

    I think that we should proceed with the development of AI, brain implants, and related technology. It is simply part of human nature to want to keep improving, and therefore I believe the development of artificial intelligence is simply part of the evolutionary process. The development of AI will help us in the long run, and reaching it is inevitable.

  8. Joey Antaya
    Mr. Lazzaro
    Science Research Program Per. 1
    10 April 2014
    Is Singularity Inevitable?

    The singularity is inevitable due to the exponential growth of modern technology. As computers become more advanced, the technology builds upon itself at a faster rate. Ray Kurzweil predicts that the singularity will occur by the year 2045. I think that this is possible, but unlikely. First, the economy in the United States is in a recession, and we still haven't made a full recovery, so funding for government projects will center on basic necessities rather than technological advancements. Furthermore, some governments may believe that the singularity is unethical and poses a potential threat to humanity and basic rights. This ties into the debate over whether the development of artificial intelligence will be positive or negative.
    Artificial intelligence will be positive because it can support humanity in manufacturing, agriculture, and other jobs that are vital to the human race. It will also create jobs for engineers, programmers, and mechanics. I believe that the singularity is part of the evolutionary process and a necessary step in human advancement. Once the singularity is reached, humans will be able to focus on other tasks, including medical advancements and space exploration.
    On the contrary, this revolution could be just as negative. Robots could replace human jobs by the thousands, leaving a portion of the population unemployed. Also, others hypothesize that robots will take over when they become smarter than us. In conclusion, the singularity, whenever it occurs, will change the world forever.

  9. Jackie Snow

    The singularity is inevitable, and it will happen even if someone tries to stop it. As Ray Kurzweil says, our technological abilities double every two years, because we use our newest technology to build future technology, which is extremely efficient. By this law, in 30 years our technology will be almost 33,000x more powerful than it is now. To reach a time when nanobot technology is common, we would have to wait 54 years, to the year 2068, and that is when the singularity will happen. Engineers are already working on human-like robotic technology: beings coded to respond to certain stimuli. I believe AI technology will be very beneficial as long as it is closely monitored and the world sticks to a good code of ethics. However, the chances of that are slim: there will definitely be a black market for this technology, and not all countries will use it morally. We should definitely continue developing these machines (brain implants, etc.). These technologies will help us survive better in our environment, which is congruent with Darwin’s theory of evolution. It is also congruent with survival of the fittest, meaning that only the people most fit for the environment, i.e., the people using the new technology, will be able to survive.
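    The "33,000x in 30 years" figure is just compound doubling. A quick sketch checks the arithmetic, assuming (as the essay does) a clean two-year doubling period; the function name is illustrative, not from any source:

```python
# Sketch of the essay's doubling arithmetic: capability is assumed
# to double every 2 years, per the Kurzweil claim cited above.

def growth_factor(years, doubling_period=2):
    """Multiplier on capability after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(growth_factor(30))  # 2**15 = 32768.0, i.e. "almost 33,000x"
print(growth_factor(54))  # 2**27 = 134217728.0, the 2068 horizon
```

    So 30 years of doubling gives 2^15 ≈ 33,000, matching the essay's number; 54 years gives a factor of over 100 million.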

  10. Sarah S.
    Although I believe that our technology will eventually reach the point where a singularity is possible, I do not believe we will ever see widespread use of technology that surpasses humans, due to laws and restrictions. I agree with points various speakers made in Transcendent Man: technology improves exponentially, doubling every two years. Following this trend, it does appear that we will eventually reach the singularity. However, I do not agree with Ray Kurzweil’s prediction that it will occur by 2045; instead, it will occur later. The movie briefly showed various new technologies, such as a chip someone put in their arm to control a large robot arm. Already this shows that we can create complex machines that act as extensions of ourselves. Truthfully, the largest barrier is simply providing AI with a consciousness. However, this comes with opposition, as various theories have arisen about the outcome of overusing AI. One man in the video believed that when AI becomes smarter than the human race, it will consider us inferior and exterminate us all. I believe that many will see AI as immoral, dangerous, or unnatural and oppose widespread use. Thus, I believe AI will eventually be created, but global laws will keep it out of the daily lives of humans. AI will be used, but only within private companies or medical facilities that have obtained special permits.
    I believe the development of AI and related technologies will be positive overall when used in moderation, but negative when incorporated into daily life. For example, many new technologies, such as nanobots or PillCams, can be used in the medical industry to perform precise procedures inside a patient’s body. If technology can create artificial eyes for those who are blind or restore other natural senses (touch, taste, smell, hearing), it will be extremely beneficial to many. However, if AI becomes too incorporated into our society, many negative results could occur: the AI could think of us as inferior and end the human race, or it could simply take many human jobs. If AI surpasses human intelligence and develops a creative consciousness, it could easily take over jobs such as surgeon or assembly-line worker, or even work to build more AI.
    I believe that we should definitely proceed with the development of AI. The medical implications could be immense, but it is essential that AI is not overused, and that it is used fairly across the population. For example, as mentioned before, the wealthy cannot be the only ones with access to such technology. Otherwise, if there were technology that allowed someone to live forever and only the wealthy could afford it, a human conflict could develop. Even in the past, financial divides have caused those in poverty to revolt against the wealthy; this occurred during the French Revolution. If the people who cannot afford this technology know that others can use their wealth to live forever, more wars may break out. However, we also cannot provide this technology to everybody; otherwise overpopulation will occur and harm the human race overall. This technology should be used only when necessary, such as when people are physically disabled. Also, I believe this is somewhat part of the evolutionary process, in which we better adapt to our surroundings. If one can regrow a lost limb or gain another sense, the human race will simply grow more advanced. For example, in one video we watched, some people had magnets implanted in their fingertips to feel magnetic fields, essentially adding a sixth sense. There is also the argument that adding senses would throw our natural senses off balance. With AI used in moderation, under strict laws and regulations, this improved technology could be extremely beneficial to the human race.

  11. Adam Hurwitz
    The singularity is a time when humans will create technology that surpasses us mentally. I believe that the singularity is inevitable, and such a time seems much closer than the general population thinks. There is already a large force pushing the boundaries of the human body, but we are running out of room: the human body is only capable of certain upgrades. In the past, we have found new places to go when we run out of space. For example, by the time we run out of room for the population of Earth, chances are we will have the ability to move large groups of people to the moon. I believe we will face the same issue when we can no longer upgrade our own bodies. When AI reaches a certain point, we will begin to upgrade it rather than ourselves, strengthening its mind and body when we can no longer do so within our own. The numbers show that technology has grown exponentially over the last 100 years or so, and soon we will be producing new technology so fast that we won’t be able to keep up. This exponential growth suggests that we will surpass our own mental capabilities by 2045.
    I believe the human brain will very soon be able to create something beyond its own means: an artificial intelligence that Ray Kurzweil sees coming by 2045. I think this will arrive when Kurzweil proposes, but it will not necessarily look like the Terminator movies with Arnold Schwarzenegger; it will look a lot like us. I think these will be positive beings, and I believe laws will be made to keep control over AI. The only way we will be overpowered by AI is if people get around these laws and build something with the mental capability to control us. I believe AI will be positive.
    Yes, this will be a very good thing for the human race and for the advancement of our technological capabilities. While many people think these improvements are “unnatural,” I believe we are meant to do these things. I think we have known for a very long time that we are a species unlike any other. We are meant to push the bar forward between what is known and what is not. This is part of the evolutionary process.