Fourth week as a software developer intern
This week I learned about a really interesting concept called pretotyping, which is based on the Law of Market Failure: most new ideas fail, even when they are well executed. That’s why you should make sure you’re building the right thing before you build the thing right. You have to try a lot of new ideas and learn to fail fast.
Ideas are worthless if you don’t work on them, so you should stop looking for ideas and start looking for innovations.
Pretotyping is defined as: “Validating the market appeal and actual usage of a potential new product by simulating its core experience with the smallest possible investment of time and money”
The more time and money you invest in a project, the harder it is to say goodbye. With solid data, we can know early on whether a project has a real chance of working.

This week we have a project where we have to use pretotyping to find out whether our product ideas are worth further development. Back when I was learning electronics, I always had in mind an app capable of predicting the value of a resistor from nothing but an image. I knew which tools I would need to make it happen, but I never had the chance to build it. Using the pretotyping technique, I was able to test this idea by simulating the behavior of my application: over WhatsApp, students sent me an image of a resistor and I sent back the resistance value, then asked them a couple of questions about how they felt about the “app”. In the end, after collecting the students’ feedback, we concluded that the app would be impractical, mainly because learning the color code is not that complicated. I still think it could be helpful as a learning tool, so we will probably pivot the idea.
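For context, the core logic behind the “real” app would mostly be the standard resistor color code, which fits in a few lines. Here is a minimal Python sketch, assuming the band colors have already been extracted from the image (which is the genuinely hard part):

```python
# Minimal sketch of the resistor color-code lookup the app would rely on.
# Assumes the color bands were already detected from the image somehow.
DIGITS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "grey", "white"]

def resistance_from_bands(band1, band2, multiplier):
    """Compute the value of a resistor from its first three color bands."""
    value = DIGITS.index(band1) * 10 + DIGITS.index(band2)
    return value * 10 ** DIGITS.index(multiplier)

# Example: yellow-violet-red -> 47 * 10^2 = 4700 ohms (4.7 kOhm)
print(resistance_from_bands("yellow", "violet", "red"))  # 4700
```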

Again, we went deeper into the world of quantum mechanics, applied to the new revolution in computing. Since the beginning of humanity we have been in a close relationship with information processing: the sounds that evolved into languages, and later writing, are examples. But not only that; also the way we reproduce, joining two pieces of information to produce a newborn with characteristics of both progenitors. Even the universe: since the Big Bang, every particle holds information about that event (like its current position).
As we explained in the previous entry of this blog, the advantage of quantum computing over traditional computing is that it can work in a parallel manner, thanks to the properties of the qubit, which lives in a world of probabilities instead of a binary choice.
With the recent hype of machine learning, we can see that one of the biggest limitations is how to deal with big data. Right now we have access to almost limitless data, but we are unable to process it correctly due to computing power limitations. That’s when quantum computers should come into play because one of the main objectives in this area is to take classical information and compress it into a quantum state.
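As a rough illustration of that last idea, here is a toy sketch (plain NumPy, no real quantum hardware) of amplitude encoding: a classical vector of 2^n numbers becomes the amplitudes of an n-qubit state, so 8 values fit in just 3 qubits:

```python
import numpy as np

# Toy illustration of "compressing" classical data into a quantum state:
# a vector of 2^n classical values becomes the amplitudes of an n-qubit state.
data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # 8 values -> 3 qubits

state = data / np.linalg.norm(data)   # normalize so probabilities sum to 1
probabilities = np.abs(state) ** 2    # Born rule: |amplitude|^2

n_qubits = int(np.log2(len(data)))
print(f"{len(data)} classical values encoded in {n_qubits} qubits")
for basis, p in enumerate(probabilities):
    print(f"|{basis:03b}>: probability {p:.3f}")
```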

Another habit that I learned and want to apply is to treat myself as a black box and be self-critical. It’s easy to blame others for my failures; I should stop trying to justify them and start building a growth mindset. To get there, I can divide problems into small chunks and get better at each of those aspects.
Richard Feynman was a really interesting person. I had the opportunity to watch some videos about his life and got so interested that I spent a bit more time reading about him. He was a great explainer because he was able to turn abstract concepts into down-to-earth explanations. One example is his famous Feynman diagrams, which greatly simplify the complex formulas of his time.

He’s also known as one of the precursors of parallel computing: by dividing a problem into independent parts, his team could work on those tasks simultaneously, solving a problem expected to take 9 months in just 3! He also argued that simulating nature on a computer is impractical, because there are far too many phenomena that need to be taken into account.
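The idea is the same one we still use today: split a problem into independent chunks and work on them at the same time. A tiny, hypothetical Python sketch with multiprocessing:

```python
from multiprocessing import Pool

def heavy_task(chunk):
    """Stand-in for an independent piece of the overall problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the problem into 4 independent parts that can run simultaneously.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(heavy_task, chunks)  # run in parallel
    print(sum(partial_results))  # combine the partial answers at the end
```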
Another gift he left us is his explanation of the scientific method: to create a new law, we start with a guess, then compute the consequences of that guess, and finally compare those results with experiments in nature. If the results disagree with what we observe, the guess is wrong and we start the process all over again.
Stephen Wolfram is another prominent figure of recent years: a mathematician who got caught up in the computing world. One of his many interesting ideas is that even the simplest systems can generate extremely complex behavior.
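The classic illustration of this is his elementary cellular automata, for example Rule 30: a one-line update rule applied to a row of cells that produces surprisingly complex, almost random-looking patterns. A quick sketch:

```python
# Rule 30 elementary cellular automaton: a trivial rule, complex output.
RULE = 30

def step(cells):
    """Apply Rule 30 once to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```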
He has many other ideas that might become the basis of future research. One of these initiatives is to democratize programming, making it more accessible through instructions that read more like natural human language, and to unify different fields of science such as chemistry, physics, economics, and biology.
An even more ambitious task is to calculate and predict the behavior of the universe with a computer, running simulations until he finds the rule that governs our universe.

Another interesting topic was testing and automation. Here I learned how big companies like Google and Netflix have developed many tools intended to give them a better understanding of, and preparation for, failures in their systems.
Google uses a technique called continuous integration. The system provides real-time information whenever the build breaks, which is important because you want to know what is failing as soon as possible, and it gives developers the tools to test their code. Continuous integration also keeps the build healthy, lets developers submit changes with confidence, and shortens iteration times.
Then there are flaky tests: tests that fail because of external factors, such as the environment, rather than the code itself. They make it hard to find what is actually breaking the build and have real consequences, such as wasting developers’ time.
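One simple, low-tech way to spot flakiness (not necessarily how Google handles it internally) is to rerun a failing test: a test that sometimes passes and sometimes fails on the exact same code is flaky by definition. A sketch of that idea:

```python
import random

def run_with_retries(test_fn, attempts=3):
    """Rerun a test; mixed pass/fail results on the same code mark it as flaky."""
    results = []
    for _ in range(attempts):
        try:
            test_fn()
            results.append(True)
        except AssertionError:
            results.append(False)
    if all(results):
        return "pass"
    if not any(results):
        return "fail"    # a real, reproducible failure
    return "flaky"       # passed sometimes: likely an environmental issue

def test_external_service():
    # Hypothetical test that depends on an unreliable external factor.
    assert random.random() > 0.3

print(run_with_retries(test_external_service))
```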

To show how they deal with these problems, they gave us the example of how the Google Chrome team on iOS works. That team faces a lot of limitations when releasing software in the Apple ecosystem because of Apple’s security and development policies, so it has to operate very efficiently. They adopted Git and work toward small milestones each time. They use automated testing, with bots that simulate human interaction with the app and return fast feedback, because when you rely on manual testing your cycles are limited by how fast your manual testers can get that feedback back to you.
The tests they typically run are unit tests, then end-to-end tests, later a performance test that compares performance per cycle, and finally a screenshot test that verifies the UI is displayed correctly.
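To make those categories more concrete, here is a toy sketch of the two extremes: a unit test of a small piece of logic, and a “screenshot” test that just compares a hash of the rendered bytes against an approved golden image (the rendering function is made up for illustration):

```python
import hashlib

def format_price(cents):
    """Tiny piece of app logic used by the unit test below."""
    return f"${cents / 100:.2f}"

def test_format_price():
    # Unit test: checks one small piece of logic in isolation.
    assert format_price(1999) == "$19.99"

GOLDEN_SCREEN = b"pretend these are the pixels of the approved UI"

def render_current_screen():
    # Hypothetical stand-in for driving the app and capturing the screen.
    return b"pretend these are the pixels of the approved UI"

def test_screenshot_matches_golden():
    # Screenshot test (toy version): hash the rendered UI and compare it
    # with the hash of a previously approved "golden" screenshot.
    assert hashlib.sha256(render_current_screen()).hexdigest() == \
           hashlib.sha256(GOLDEN_SCREEN).hexdigest()
```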
You don’t want to be distracted by bug reports from the last release or waste time on them; for example, if the build has been broken for days, developers tend to just ignore the problem. Google uses the following questions to evaluate its bugs, decide how to act, and learn from its errors:
- Does solving this problem need a human?
- If so, who is the right assignee?
- Can we explain the problem and its likely causes?
- How quickly was it resolved?
Now I will talk about how Netflix implemented one of the most popular testing methodologies. It all started with Netflix’s migration to the cloud infrastructure provided by AWS: they needed to be sure that the loss of an Amazon instance wouldn’t affect Netflix. They created Chaos Monkey, and it went so well that they added more failure-injection options, allowing them to test a more complete suite of failure states with what they called the Simian Army.
Traditional tests such as unit or integration tests are designed to avoid failure, but Netflix took it one step further and embraced failure. The Simian Army introduced a bit of chaos by simulating failures such as unavailable zones, region outages, degraded methods, and so on. This later evolved into what is now called Chaos Engineering. Here are some of its key points:
- Failures have become much harder to predict; these failures cause outages, and outages can cost a company millions.
- Chaos engineering lets you compare what you think will happen to what happens in your systems. You “break things on purpose” to learn how to build more resilient systems.
- “The best defense against major unexpected failures is to fail often. In this way, we force our services to be more resilient.” This evolved into Failure Injection Testing, which uses the same concepts as the Simian Army but gives developers more granular control.
By causing failures in production, seeing how the system behaves, and validating your assumptions, you save yourself a lot of trouble down the line.
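A toy version of that failure-injection idea (nothing like Netflix’s real tooling, just the concept) is a wrapper that randomly kills or slows down calls to a dependency, so you can check whether the rest of the system degrades gracefully:

```python
import random
import time

def inject_failures(func, failure_rate=0.2, max_delay=0.5):
    """Wrap a dependency call so it sometimes fails or slows down, chaos-style."""
    def chaotic(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError("injected failure: pretend this instance died")
        time.sleep(random.uniform(0, max_delay))   # injected latency
        return func(*args, **kwargs)
    return chaotic

def fetch_recommendations(user_id):
    return [f"movie-{i}" for i in range(3)]

# Validate the assumption that the caller survives a flaky dependency.
flaky_fetch = inject_failures(fetch_recommendations)
for user in range(5):
    try:
        print(flaky_fetch(user))
    except ConnectionError:
        print("fallback: show a cached, non-personalized list")  # graceful degradation
```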

Something fragile does not handle change well, and the opposite would be robust; but being robust really just means being indifferent to change.
- “Vaccinate yourself with failure now so that you are immune to it in the future”
It’s not just about the system; it’s about building a culture: an organization that can benefit from change and failure, getting better instead of simply staying the same.
Another interesting methodology is predictive code coverage. By identifying real-time code usage patterns and focusing testing on the code that is executed most frequently, you can reveal the hot paths with low test coverage. It also lets you identify dead code that can be removed.
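A very simplified sketch of that idea: record which functions actually run in production (here just a counter updated by a decorator), then compare the hot paths against what the test suite covers; anything that never runs at all is a candidate for dead code. The coverage data below is assumed, not real:

```python
from collections import Counter
import functools

usage = Counter()  # how often each function runs in "production"

def track_usage(func):
    """Record real usage so tests can focus on the hottest paths."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        usage[func.__name__] += 1
        return func(*args, **kwargs)
    return wrapper

@track_usage
def checkout(cart):
    return sum(cart)

@track_usage
def legacy_export(cart):
    return ",".join(map(str, cart))

# Simulated production traffic: checkout is hot, legacy_export never runs.
for _ in range(1000):
    checkout([1, 2, 3])

covered_by_tests = {"legacy_export"}  # assumed coverage data from the test suite
for name, calls in usage.most_common():
    if name not in covered_by_tests:
        print(f"{name}: {calls} calls in production but no test coverage")
print("possible dead code:", {"checkout", "legacy_export"} - set(usage))
```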
I learned so much this week. I think I’ll be less naive the next time I have a million-dollar idea, the world of quantum mechanics and its present and future applications never ceases to amaze me, and it was fascinating to see how big companies found creative solutions to the limitations on their releases and how careful they need to be. Thanks for reading this far, see you next time!