It might be interesting to note that in many job interviews, I pose to candidates the very question we're about to explore in this video, and if someone answers it correctly, they can secure the position without further questioning.

In this video, we will delve into the crucial topic of choosing the right technology for your digital project.

I will guide you through a method that I've frequently used in my previous projects over the years.


In previous videos, we explored several applications of FPGAs.

However, the challenge lies in the fact that many of these applications can *also* be accomplished using other technologies, like **microcontrollers**.

In this video, I'll present a method to guide you in making informed decisions as you choose the right technology for your project.

## Key Factors to Consider When Choosing the Right Technology for Implementation

When it comes to selecting a solution or the right technology for implementation, there are several factors to consider.

These factors include **cost-effectiveness, time efficiency, available features, and how easy it is to execute the project.**

Now, let's talk about FPGAs.

You already know that FPGAs offer lightning-fast processing capabilities due to their parallel processing abilities.

They are also highly adaptable when it comes to connecting with peripherals, thanks to their numerous pins.

However, despite their numerous advantages, FPGAs have their downsides.

For instance, they tend to be more **expensive** compared to processors.

One of the significant challenges when working with FPGAs is that the design, testing, and debugging processes usually **take longer** than with processors.

Given these factors, it's often advisable to **begin with processors** when choosing a technology for your project, preferably starting with the most cost-effective ones.

If the processors can't handle your project's requirements, only then should you explore more expensive options, such as FPGAs.

## Questions You Must Address to Determine the Ideal Technology for Your Project

But how do we truly figure out which technology is suitable for implementing a project?

To tackle this, there are two crucial questions you need to address:

- What is the computational complexity of the target algorithm in terms of **arithmetic operations**?
- What is the **runtime limit**, or **deadline**, for executing this algorithm?

**The technology you pick must be able to handle the computational load you calculated in the first question within the runtime constraint specified in the second question.**

Let's illustrate this with an example to make it easier to understand.

## Example of Choosing the Right Technology: Implementing an Alarm System

Let's say we want to create an alarm system that detects *moving objects.*

Imagine we're monitoring an area with a camera, and **we want an alarm to go off in under 10 milliseconds when something starts moving within that area.**

We have a *Processing Unit* that needs to run a motion detection algorithm on the camera image and trigger the alarm if it detects any movement.

Now, the question is, **what technology should we use to implement this processing part?**

As I mentioned earlier, to choose the right technology, we need to answer two crucial questions.

### How Many Arithmetic Operations Are Required By the Algorithm?

The first fundamental question is: How many arithmetic operations are required by this algorithm?

In other words, we need to calculate the number of addition and multiplication operations needed to run the algorithm *once* on a specific set of input data.

For simplicity, let's assume that the motion detection algorithm in our example requires **5 million addition and multiplication operations.**

So, the answer to the first crucial question is 5 million arithmetic operations for one run of the algorithm on the received images.

### What Is the Runtime Limit for Executing the Algorithm?

Now, let's tackle the second critical question: What is the runtime limit for executing this algorithm?

Answering this question can be a bit tricky and requires careful consideration.

To understand the answer, we need to focus on our specific problem.

Our problem is that when a moving object enters the camera's view, an alarm must go off in less than 10 milliseconds.

We need to figure out *how much time we have available* to run the motion detection algorithm in order to meet the 10-millisecond response time requirement.

At first glance, you might think, "Okay, let's just set the allowable execution time for the algorithm to 10 milliseconds." But here's the catch:

Let's say we receive the first set of images at time zero seconds, and at that moment, there's no moving object in the camera's view.

So, we expect that when the algorithm finishes in 10 milliseconds, no alarm will sound.

Now, suppose a moving object suddenly appears in the camera's view after 3 milliseconds.

The processing unit is still busy working on the previous images, so it doesn't detect the new object. As a result, even after 10 milliseconds, the alarm won't go off.

It's only after another 10 milliseconds, when new images are captured and processed, that we expect the alarm to sound, as it will now detect the moving object in the camera's view.

Now, we need to figure out how much time has passed from the moment the moving object came into view of the camera until the alarm actually went off.

This period is 17 milliseconds, which is notably longer than the 10-millisecond alarm response time specified in the problem.

So, choosing 10 milliseconds as the runtime limit for the algorithm to execute is too slow, and it won't meet the requirements.

Let's try to *speed up* the execution of the algorithm a bit to see if we can solve the problem.

In this new scenario, let's say we allocate 5 milliseconds for a single execution of the algorithm.

This means that the 5 million addition and multiplication operations needed for the motion detection algorithm on the image will take 5 milliseconds to complete.

At time zero, we receive the first batch of images.

Once again, suppose a moving object enters the camera's view after 3 milliseconds.

After waiting for 5 milliseconds, we expect the algorithm's execution to finish, but no alarm goes off because the moving object wasn't in the camera's view when the images were taken.

After 5 milliseconds, a new set of images is captured.

Now, because the moving object is within the camera's view, we expect the alarm to sound after 5 milliseconds.

From the moment the object entered the camera's view to when the alarm went off, it took 7 milliseconds. This is less than the 10-millisecond alarm requirement.

So, it appears that 5 milliseconds is a suitable amount of time for running the algorithm.

In the worst-case scenario, even if the moving object enters the camera's view just after a set of images is captured, the time between the object appearing and the alarm going off is two 5-millisecond periods, which is still within the 10-millisecond limit.

Therefore, 5 milliseconds seems to be the optimal and appropriate runtime limit for this algorithm.
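The timing analysis above can be sketched as a small model. This is a simplified sketch of the example's pipeline, assuming images are captured every `period_ms` milliseconds and the processing of each capture finishes one period later; the 3-millisecond appearance time is the figure from the example:

```python
import math

def alarm_latency_ms(period_ms: float, appearance_ms: float) -> float:
    """Alarm latency for a pipeline that captures images every `period_ms`
    and finishes processing each capture one period later.

    An object appearing at `appearance_ms` is first visible in the capture
    taken at the next multiple of `period_ms`; the alarm fires when the
    processing of that capture completes, one full period after it.
    """
    next_capture = math.ceil(appearance_ms / period_ms) * period_ms
    alarm_time = next_capture + period_ms
    return alarm_time - appearance_ms

# Object appears 3 ms after the first capture:
print(alarm_latency_ms(10, 3))  # 17.0 ms -- misses the 10 ms requirement
print(alarm_latency_ms(5, 3))   # 7.0 ms  -- meets it
```

Running the same function with an appearance time just above zero confirms the worst case: the latency approaches two full periods, which is why the period must be at most half the required response time.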

Now that we have answers to our two fundamental questions, we can decide on the right technology.

Let's recap the answers to these key questions for our example:

The motion detection algorithm requires **5 million arithmetic operations** to run.

We have only **5 milliseconds to complete this algorithm.**

So, to determine the required processing speed, we divide the 5 million operations by the 5 milliseconds, which gives us 1 *GOPS*, or one billion operations per second.
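As a quick sanity check, this division can be written out directly (the 5 million operations and 5-millisecond budget are the assumed figures from the example):

```python
ops_per_run = 5_000_000   # additions + multiplications per algorithm run
deadline_s = 5e-3         # runtime limit for one run, in seconds

required_ops_per_s = ops_per_run / deadline_s
print(f"{required_ops_per_s / 1e9} GOPS")  # 1.0 GOPS
```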

### Exploring Implementation Options for the Processing Part

Now, let's consider our options for implementing the processing part of this system:

- An AVR microcontroller with a 20 MHz clock
- An ARM processor with a 400 MHz clock
- An FPGA

Starting with cost-effectiveness, we first explore the AVR microcontroller, since it is the cheapest option available.

If it turns out that the AVR processor can't meet our requirements, we move on to the ARM processor. If that also falls short, we'll consider the FPGA.

### Evaluating Available Options: AVR Microcontroller

Now, let's dive into the details of the AVR microcontroller.

The AVR microcontroller has a clock speed of 20 megahertz. Assuming that each addition or multiplication instruction takes only one clock cycle, we can calculate that each instruction requires 50 nanoseconds to execute.

To find out how long it takes to run the entire algorithm once, we multiply the 5 million addition and multiplication operations by 50 nanoseconds.

This results in **250 milliseconds.**

Since 250 milliseconds is much greater than our desired 5 milliseconds, it's clear that the AVR microcontroller isn't a suitable choice for implementing this project.

### Evaluating Available Options: ARM Processor

Now, let's explore the ARM processor as our next option.

Assuming a clock speed of 400 megahertz for this processor and that each instruction takes just one clock cycle to execute, we find that each instruction needs only 2.5 nanoseconds to complete.

So, to calculate the time needed to run the motion detection algorithm once, we multiply the 5 million addition and multiplication operations by 2.5 nanoseconds, which results in **12.5 milliseconds.**

Unfortunately, even with the ARM processor, which has a higher clock speed than the AVR, the time required to complete the algorithm is still much greater than our desired 5 milliseconds.

So, the ARM processor also doesn't meet our requirements for this project.
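The same back-of-the-envelope check can be applied to each candidate in one place. This sketch keeps the simplifying assumption from the text that each addition or multiplication takes exactly one clock cycle, which is optimistic for real processors:

```python
ops_per_run = 5_000_000   # operations per run of the motion detection algorithm
deadline_s = 5e-3         # runtime budget per run, in seconds

# Candidate processors and their clock frequencies in Hz
candidates = {
    "AVR @ 20 MHz": 20e6,
    "ARM @ 400 MHz": 400e6,
}

for name, clock_hz in candidates.items():
    runtime_s = ops_per_run / clock_hz  # one operation per cycle assumed
    verdict = "fast enough" if runtime_s <= deadline_s else "too slow"
    print(f"{name}: {runtime_s * 1e3:.1f} ms -> {verdict}")
# AVR @ 20 MHz: 250.0 ms -> too slow
# ARM @ 400 MHz: 12.5 ms -> too slow
```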

### Evaluating Available Options: FPGA

Now, it's time to consider the FPGA.

With its parallel processing capabilities and the hardware resources it offers, an FPGA can easily implement this algorithm and meet our speed requirements.
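To see why parallelism closes the gap, note that an FPGA performing many operations in every clock cycle multiplies its effective throughput. The clock rate and number of parallel multiply/add units below are illustrative assumptions, not figures from the example:

```python
clock_hz = 100e6       # assumed FPGA clock: 100 MHz
parallel_units = 16    # assumed number of parallel multiply/add units

# Unlike a processor doing one operation per cycle, the FPGA does
# `parallel_units` operations per cycle.
throughput_ops_s = clock_hz * parallel_units        # 1.6e9 ops/s
runtime_ms = 5_000_000 / throughput_ops_s * 1e3
print(f"{runtime_ms:.3f} ms")  # 3.125 ms -- within the 5 ms budget
```

Even at a clock rate far below the ARM's 400 MHz, a modest amount of parallelism pushes the throughput past the required 1 GOPS.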

## Additional Factors Influencing the Choice of Technology

One more point to consider when selecting the right technology is that the algorithm itself isn't the only factor to think about.

Other factors, such as **ease of access** and **ease of integration** also play a role.

Imagine you've worked on a previous project where you used an FPGA because its algorithms required one.

Now, you want to add more processing modules to this project.

You might realize that you can implement these new modules with a cost-effective processor this time.

However, since you've *already* used an FPGA in your previous project, it may make sense to use an FPGA again this time, even though processors could do the job.

**Cost** and **time** are critical factors in project execution, especially in a professional context.

Suppose you have modules designed for an FPGA, and you have their code readily available.

With the knowledge gained from this program, you understand that you could implement these modules with more affordable technologies like the AVR microcontroller.

However, considering the time and potential costs involved in re-implementing these modules for the AVR microcontroller, it's often more practical and cost-effective to continue using the FPGA for those modules, even if there are more budget-friendly options available.