You may have heard something along the lines of “computers only really understand 1s and 0s.” This is more or less accurate; however, very few, if any, modern computer scientists write their code as long streams of 1s and 0s. Instead, we mostly write code in a so-called “high-level” language, like the Python language you will be learning in this class. Languages like Python are more precise than the pseudocode algorithms we’ve seen so far, but are still relatively easy for humans to read (all things considered, of course!). When an algorithm that we write in Python is run on a computer, software and hardware automatically translate the code into those infamous sequences of 1s and 0s. This translation happens in several steps, and each one refines the original algorithm into a lower level of abstraction. Without abstraction, in fact, modern computers wouldn’t work at all, even for the simplest tasks!
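We can actually peek at one step of this translation from Python itself. The sketch below uses Python’s standard `dis` module to show that a single high-level line of code is translated into several lower-level bytecode instructions on its way toward 1s and 0s (the exact instruction names vary between Python versions, so treat the output as illustrative):

```python
import dis

def add(a, b):
    # One high-level line of Python...
    return a + b

# ...becomes a sequence of lower-level bytecode instructions,
# one of the several translation steps described above.
instructions = [instr.opname for instr in dis.Bytecode(add)]
print(instructions)
```

Running this prints a list of instruction names such as loads, an addition operation, and a return, showing that even trivial code passes through a lower level of abstraction before it ever reaches the hardware.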
To summarize, abstraction allows us to think about performing higher-level, more complex actions without worrying about how they are carried out. Abstraction doesn’t mean that the lower-level details of an abstracted algorithm don’t exist or never have to be written at some point. Rather, abstraction is a mechanism that lets us ignore those details when it is convenient, or until they are needed: a mechanism that is central to the entire field of computer science.
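A concrete illustration of this idea, using only built-in Python: the `sorted` function is an abstraction over a sophisticated sorting algorithm, and we can use it without knowing any of the lower-level comparing and rearranging it performs.

```python
# sorted() abstracts away the details of how sorting is done.
# We state *what* we want (the scores, highest first), not *how*
# to compare and rearrange the elements step by step.
scores = [88, 42, 95, 67]
ranked = sorted(scores, reverse=True)
print(ranked)  # [95, 88, 67, 42]
```

The details of the underlying algorithm do exist and were written by someone, but abstraction means we only need to think about them if and when it becomes necessary.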