An algorithm is a self-contained step-by-step set of operations to be performed. Algorithms exist that perform calculation, data processing, and automated reasoning. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.


Looking for a way to avoid searching through large data sets for data that doesn't exist? A Bloom Filter might be what you need. This data structure is a probabilistic filter that allows you to avoid unnecessary searches when you know the data definitely won't be found. Read on to find out more...

You've identified a problem that has been known for thousands of years. Fundamentally it's arrogance and stupidity. Bright young things think they have the answer! But examination shows the problem is far more complex, far more basic, and difficult to solve. If it were easy it would already have been solved! It's easy to cry wolf (and extremely valuable) but the real contribution is to propose a solution. Where's yours? A plea for inclusion in the decision making process doesn't wash. No one invites outsiders into the inner sanctum.

The solution exists in disciplined method. Information systems academic professionals have a short history (30 to 50 years) of extraordinary science from some brilliant minds. We have a full repertoire of how to properly address systems development. Unfortunately it is not applied well. If you were a structural engineer or an electrical engineer, or whatever, you would be well measured in the work you produce. We don't do this, so we allow crap to be unloaded on our consumers. We have the metrics. We don't apply them. That is the answer to our defective development.

Calling this "algorithms" is like calling the car a horseless carriage: it misunderstands the potential. Just as early attempts at the automobile were inept and full of missteps, so too are our early attempts at machine learning and artificial intelligence. It took Google, with all its computers and brainpower, until 2012 to identify a cat in a video. But the same technology that can learn what a cat "looks like" can formulate a response, locate tumors in MRI data, or turn the wheel of a self-driving car. You might be interested in trying out TensorFlow: "Don't worry, you can't break it." http://playground.tensorflow.org/

To understand the way machine learning works you need little more than a basic understanding of the first derivative in differential calculus. Detection and prediction are all about minimizing error with gradient descent along multidimensional slopes. As simple as it seems today, we only got workable AI tools in 2016. If the world had looked at the 1886 patent 37435 in 1887 and let its shortcomings shape its vision of the future, we would probably still be riding horses.
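As a minimal illustration of that idea, the sketch below minimizes an error function by repeatedly stepping against its first derivative. The function f(x) = (x - 3)^2, the starting point, and the learning rate are all made-up values chosen for clarity, not anything from a real ML library.

```java
public class GradientDescentDemo {
    // Minimize f(x) = (x - 3)^2, whose first derivative is f'(x) = 2 * (x - 3)
    public static double minimize(double start, double learningRate, int steps) {
        double x = start;
        for (int i = 0; i < steps; ++i) {
            double gradient = 2 * (x - 3); // slope of the error at the current point
            x -= learningRate * gradient;  // step downhill, against the slope
        }
        return x;
    }

    public static void main(String[] args) {
        // Illustrative values: start at 0 with a 0.1 learning rate
        System.out.println(minimize(0.0, 0.1, 100)); // approaches the minimum at x = 3
    }
}
```

Real machine learning does the same thing, only over millions of parameters at once instead of a single x.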

AI and machine learning are different from human learning, as is illustrated by this image that was created when we tried to turn the cat-detection algorithms around and get them to generate a picture of a cat. We don't really know how machine learning works in any given instance. It's unpredictable to the human mind because it's about detecting patterns in large amounts* of data, something we cannot do very well, and something that machine learning, suddenly, has come to grasp. I am presently experimenting and writing about this field, and I guarantee it is revolutionary, a black swan of the magnitude of the atom bomb, and perhaps the greatest scientific advancement of our lifetimes. * Google analyzed literally millions of cat videos before their algorithms could even find a cat. And then they drew this monster. Obviously we're only on the threshold.

Iteration:
Iteration is the repetition of a process. A student who goes to school repeats the process of going to school every day until graduation. We go to the grocery store at least once or twice a month to buy products, and we repeat this process every month. In mathematics, the Fibonacci sequence follows this kind of repetition as well. Let’s consider the Fibonacci sequence where the first two numbers are 0 and 1, and every other number is the sum of the previous two numbers.

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89

Iteration Steps:
The Fibonacci formula can be written as, F(n) = F(n - 1) + F(n - 2)

Each time we compute a new Fibonacci number (the nth number), that nth number becomes the (n - 1)th number when we move on to find the (n + 1)th Fibonacci number as our next nth. Following the iteration steps above: if n = 2, then F(2) = F(1) + F(0) = 1 + 0 = 1.
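The iteration described above can be sketched in Java: two variables hold the previous two Fibonacci numbers, and each pass of the loop slides them forward by one position.

```java
public class FibonacciIteration {
    // Returns the nth Fibonacci number, with F(0) = 0 and F(1) = 1
    public static int fibonacci(int n) {
        if (n < 2) return n;
        int previous = 0; // F(n - 2)
        int current = 1;  // F(n - 1)
        for (int i = 2; i <= n; ++i) {
            int next = previous + current; // F(n) = F(n - 1) + F(n - 2)
            previous = current;            // the old F(n - 1) becomes F(n - 2)
            current = next;                // the new F(n) becomes F(n - 1)
        }
        return current;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 12; ++i) System.out.print(fibonacci(i) + " ");
        // prints: 0 1 1 2 3 5 8 13 21 34 55 89
    }
}
```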

Prime numbers are natural numbers greater than 1 that have only two divisors (the number itself and 1). By “divisible” we mean dividend % divisor = 0 (% is the modulo operator; it gives the remainder of a division operation). We’ll follow multiple approaches to find out whether a number P is prime.

Approach 1: Divide P by all the numbers from 2 to P - 1. If any of these numbers divides P evenly, then P is not a prime number. The algorithm below returns false if P is divisible by any number between 2 and P - 1; otherwise P is a prime number and it returns true.

// This method takes one parameter P
// to check if P is prime
public boolean isPrime(int P) {
    for (int i = 2; i < P; ++i) {
        if (P % i == 0) return false;
    }
    return true;
}

Approach 2: Whenever P has a divisor greater than square-root(P), it must have a paired divisor less than or equal to square-root(P). For example, the divisors of 20 are 1, 2, 4, 5, 10 and 20, and in each of the pairs below at least one divisor is less than or equal to square-root(20):
20 = 1 X 20
20 = 2 X 10
20 = 4 X 5

If we stopped looking for divisors once the current divisor i reached 4, we would still have found a member of every pair. So, instead of checking all numbers between 2 and (P - 1), we can check only 2 to sqrt(P), which gives a faster solution.

public boolean isPrime(int P) {
    // i * i <= P is equivalent to i <= sqrt(P)
    for (int i = 2; i * i <= P; ++i) {
        if (P % i == 0) return false;
    }
    return true;
}

Approach 3: The only even prime number is 2. Any other even number cannot be prime because all even numbers are divisible by 2. So, instead of dividing by all the numbers in the range 2 to (P - 1), we’ll handle 2 separately and then divide only by odd numbers up to sqrt(P).
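Following that idea, here is a sketch of Approach 3 in the same style as the earlier listings: handle 2 and the even numbers first, then test only odd divisors up to sqrt(P).

```java
public class PrimeCheck {
    public static boolean isPrime(int P) {
        if (P < 2) return false;      // 0 and 1 are not prime
        if (P == 2) return true;      // the only even prime
        if (P % 2 == 0) return false; // every other even number is divisible by 2
        // Test only odd divisors; i * i <= P is equivalent to i <= sqrt(P)
        for (int i = 3; i * i <= P; i += 2) {
            if (P % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(97)); // prints true
        System.out.println(isPrime(91)); // prints false (91 = 7 x 13)
    }
}
```

This roughly halves the work of Approach 2, since only half the candidate divisors remain.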

If we're looking for efficiency here, a number of optimizations come to mind:

1. Instead of the for-loop condition "i*i <= n", calculate (int)sqrt((double)n), store it in an int called sqrt_n, and then use "i <= sqrt_n" -- that saves O(n) multiplications.

2. The for-loop increment step "j = j + i" is more efficiently written "j += i". In fact, since the initialization and the increment step are the same, the whole loop can be written: "for (j=i; (j+=i) <= n; )"

3. In general, unless there's a good reason, use prefix decrementation ("--i") and post-fix incrementation ("i++").

4. Why bother writing "== true"? If it's a boolean, write just "if (mybool)" rather than "if (mybool == true)"! This saves O(n) subtractions, because comparison is simply subtracting the two values, noting positive/negative/zero, and throwing out the result.

5. I've found it worthwhile, when stepping through an array from end to end, to maintain a pointer and increment it rather than use an integer as a subscript, because that involves O(n) addition operations total. For example the sieving loop in the Eratosthenes method is better written:

/* assumes #include <stdbool.h> and a sieve array bool p[n + 1] */
for (int i = 2; i <= sqrt_n /* see #1 above */; i++)
    if (p[i]) {
        bool *ppp = p + i;
        for (int j = i; (j += i) <= n; )
            *(ppp += i) = false;
    }

I understand your points. But this article is focusing on algorithms, it is not about code optimization.

Another thing, as you said "Instead of the for-loop condition "i*i <= n", calculate (int)sqrt((double)n), store it in an int called sqrt_n, and then use "i <= sqrt_n" -- that saves O(n) multiplications."

From my understanding, i * i <= n is equivalent to i <= sqrt(n). The only difference is storing sqrt(n) in a variable, which is a code-optimization approach and a good thing to do (but not the prime focus of this article). I don't understand how i <= sqrt_n saves O(n) multiplications?

Linear search (searching each index in an array one by one) works almost everywhere, but it is not optimal in many cases. Let's assume we have a book with 42949672960 pages, along with a table of contents, and we want to read the content on page 104000. How do we do that? Go through every page one by one? No, linear search could take days to do that. For simplicity, let’s assume there are 18 pages in the book and we want to find the content of page 16; afterwards we will come back to why binary search is so fast and efficient for a gigantic search space.

page numbers -> 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
At first, we divide the book into two equal parts and find the middle value, which is (1 + 18) / 2 = 9 (taking the floor of 9.5). The first part has pages 1 to 9, the second part pages 10 to 18. We can definitely tell that the first part does not have our target page, because its highest page, 9, is less than 16. So we eliminate that part.

page numbers -> 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
Now we have pages 10 to 18 to search. Let’s divide again: the middle value is (10 + 18) / 2 = 14. The first part has pages 10 to 14, the second pages 15 to 18. Again, we eliminate the first part because 16 lies in the range 15 to 18. We repeat this procedure until the middle value equals our target value, 16.
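The page-finding procedure above is exactly binary search. A sketch over an array of page numbers (the method and variable names are mine):

```java
public class BinarySearchDemo {
    // Returns the index of target in the sorted array, or -1 if absent
    public static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int middle = (low + high) / 2;  // floor of the midpoint, as in the example
            if (sorted[middle] == target) return middle;
            else if (sorted[middle] < target) low = middle + 1;  // eliminate the first part
            else high = middle - 1;                              // eliminate the second part
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] pages = new int[18];
        for (int i = 0; i < 18; ++i) pages[i] = i + 1; // pages 1..18
        System.out.println(binarySearch(pages, 16));   // prints 15, the index of page 16
    }
}
```

Each comparison halves the remaining search space, so even the 42949672960-page book needs only about 35 comparisons.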

Suppose you use Uber application as a rider and you request a ride to go from one place to another. Your driver just arrived at the parking lot of your place. The only thing you know about the ride is the license plate number. How do you find your Uber ride?

The obvious thing is to go to the parking lot and search all the cars one by one. This method of searching is called linear search, where we search for all possible options (brute force search) until we get our desired result. We follow linear search in our daily life such as while finding a specific book, medicine or movie in stores.

Going back to the Uber example, let’s consider the parking lot as an array of cars (an array is a series of objects or elements of the same type). If there are 100 cars, then the car you are looking for can be any car in the range of first to 100th.

In the best case, it will be the first car
In the worst case, it will be the 100th car
In the average case, it will be around (1 + 100) / 2 = 50.5, i.e. the 50th car

Here is a coding example for a linear search in Java. For simplicity, we can assume that each license plate number is represented as an integer. The algorithm below describes how to perform a linear search to find a license number in an array that contains all the cars.

First, we find the length of the array (the total number of cars). For each index (low to high) we check whether the item at that index is equal to our required number. If it is, we stop and return that index …
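A minimal sketch of that procedure in Java (the plate numbers and target below are made-up values):

```java
public class LinearSearchDemo {
    // Returns the first index whose value equals target, or -1 if not found
    public static int linearSearch(int[] plates, int target) {
        for (int index = 0; index < plates.length; ++index) { // low to high
            if (plates[index] == target) return index;        // stop at the first match
        }
        return -1; // the target is not in the array
    }

    public static void main(String[] args) {
        int[] plates = {4031, 1187, 9521, 7700, 2864}; // hypothetical license plates
        System.out.println(linearSearch(plates, 7700)); // prints 3
    }
}
```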

The greatest common divisor (gcd) of two positive integers is their largest common divisor. Let's consider two numbers 12 and 20.

The divisors of 12 are 1, 2, 3, 4, 6, 12
The divisors of 20 are 1, 2, 4, 5, 10, 20

The highest number among the common divisors of 12 and 20 is 4, so the gcd of 12 and 20 is 4. We start by taking the minimum of 12 and 20, which is 12, and then return the highest number between 12 and 1 that divides both 12 and 20. The steps are given below:

Does 12 divide both 12 and 20? No, decrement 12 by 1 to get 11
Does 11 divide both 12 and 20? No, decrement 11 by 1 to get 10
Does 10 divide both 12 and 20? No, decrement 10 by 1 to get 9
......................................................................................................
......................................................................................................
Does 5 divide both 12 and 20? No, decrement 5 by 1 to get 4
Does 4 divide both 12 and 20? Yes, we return 4 as our answer.

The implementation of the above algorithm is -

public int gcd(int a, int b) {
    for (int div = Math.min(a, b); div >= 1; --div) {
        if (a % div == 0 && b % div == 0) {
            return div;
        }
    }
    return 1; // unreachable for positive a and b, but required by the compiler
}

One of Google's most recent algorithm changes affecting local searches is called "The Pigeon Update." This update has dramatically enhanced search queries containing the keyword "Yelp." Google searches with the word "Yelp" included will now yield Yelp at the top of the page, instead of Google's official pages.

What Is The Pigeon Update?
The word Pigeon is derived from the bird known for carrying letters and information to a specified target. There couldn't be a better name for this Google update, because Google has made it clear that they want users to find what they are looking for. Pigeon has helped local search directories become more noticeable in the search results. The directories are no longer buried underneath Google+ pages.

Fixes Yelp Issues
Directory websites like Yelp were created to assist users in finding a local restaurant or business that they'd like to do business with. The problem they had with Google is that Google+ content was being displayed above the current Yelp site when a user specifically searched for Yelp in their search.

For example, if someone searched for the specific terms "McDonald's Yelp," a Google+ page would appear above Yelp's page for McDonald's. With the Pigeon Update, this is entirely fixed. Yelp's page now appears above Google's when Yelp is used in the search query.

The increased visibility for Yelp doesn't stop there. If there are sub-directories within that same Yelp page for the …

Thanks for the compliments. I understand now why my teachers were always so hard on me when it came to my writing. I guess they just wanted to see me do well. Being picky is fine, as long as there is some limit to it. I've had clients reject me something like five times after I basically rewrote the entire article for them and did it exactly how they wanted. I have learned a lot since my freelance writing career began. It would be cool if I could get some work through Experts-Exchange, like writing tech articles for people. Is there anything like that here? Thanks.

How often have we come across situations where we need to know if two strings are 'similar' but not necessarily the same? I have, plenty of times. Until recently, I thought any functionality like that would be impossible to get my head around. Fortunately, there was already a vast field of well developed research to come to my rescue!

Whoa, hang on! I'm a beginner here. What kind of field is this?

This measure of 'similarity' between strings is known as 'Approximate String Matching'. The basic steps followed in comparing any two strings are as follows:

a) Analyze the two strings
b) Compute a 'metric' for these two strings
c) Check if the value of the metric is above or below a certain threshold.
d) Decide if the strings 'match'.

Relax. The above 4 steps will become a little clearer the moment I elaborate on one of these metrics.

But wait. What metrics are these?

A metric is a quantity of measurement to put it simply. Here we need to choose an appropriate metric to represent how similar or different two strings are.

Oh okay. So if we're measuring the same thing, why do we need different metrics?

A person's weight can be measured in kilograms or pounds. Your height can be in feet or inches. Similarly, there are different approaches to measure how similar two strings are.
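To make that concrete, one widely used metric (this excerpt does not say which metrics the full article covers, so this is an illustration) is the Levenshtein edit distance: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into the other. A sketch of the classic dynamic-programming version:

```java
public class EditDistance {
    // Levenshtein distance between strings a and b
    public static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); ++i) d[i][0] = i; // delete all of a
        for (int j = 0; j <= b.length(); ++j) d[0][j] = j; // insert all of b
        for (int i = 1; i <= a.length(); ++i) {
            for (int j = 1; j <= b.length(); ++j) {
                int substitutionCost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(
                        d[i - 1][j] + 1,      // deletion
                        d[i][j - 1] + 1),     // insertion
                        d[i - 1][j - 1] + substitutionCost); // substitution (or match)
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(levenshtein("kitten", "sitting")); // prints 3
    }
}
```

Comparing this distance against a threshold is exactly steps (c) and (d) above: a small distance means the strings 'match'.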

I spotted a typo
>>We can now understand what the longest common subsequence searches for. If we compare "PENNY","ENTITY" and "EPIPHANY", we will observe that the longest common substring is "ENY".

This algorithm (in C#) will resize any image down to a given size while maintaining the original aspect ratio. The maximum width and maximum height are both optional, but if neither is given, the original image is returned.

This example is designed to be inserted into the Page_Load event of an aspx page and the image is returned as the Response.OutputStream.
To use this:

1. Create a webpage called ShowImage.aspx and put the code below into the Page_Load event.
2. Ensure you include a reference to System.Drawing.Imaging (e.g. using System.Drawing.Imaging;).
3. Create another webpage, calling it whatever you wish.
4. Insert an Image with the source set to ShowImage.aspx.

protected void Page_Load(object sender, EventArgs e)
{
    // Read in the width and height
    int maxHeight, maxWidth;
    string h = Request.QueryString["h"];
    string w = Request.QueryString["w"];
    if (h == null || h == "") maxHeight = Int32.MaxValue;
    else maxHeight = Int32.Parse(h);
    if (w == null || w == "") maxWidth = Int32.MaxValue;
    else maxWidth = Int32.Parse(w);

    string imageUrl = Request.QueryString["img"];
    System.Drawing.Image fullSizeImg = System.Drawing.Image.FromFile(Server.MapPath(imageUrl));

    // Do we need to create a thumbnail?
    Response.ContentType = "image/gif";
    if (fullSizeImg.Height > maxHeight || fullSizeImg.Width > maxWidth)
    {
        // Resize, preserving the original aspect ratio
        int newWidth, newHeight;
        if (fullSizeImg.Height >= maxHeight && fullSizeImg.Width < maxWidth)
        {
            // Only the height exceeds the limit: scale to the maximum height
            newHeight = maxHeight;
            newWidth = fullSizeImg.Width * maxHeight / fullSizeImg.Height;
        }
        else if (fullSizeImg.Width >= maxWidth && fullSizeImg.Height < maxHeight)
        {
            // Only the width exceeds the limit: scale to the maximum width
            newWidth = maxWidth;
            newHeight = fullSizeImg.Height * maxWidth / fullSizeImg.Width;
        }
        else
        {
            // Both dimensions exceed their limits: scale by the tighter constraint
            double scale = Math.Min((double)maxWidth / fullSizeImg.Width,
                                    (double)maxHeight / fullSizeImg.Height);
            newWidth = (int)(fullSizeImg.Width * scale);
            newHeight = (int)(fullSizeImg.Height * scale);
        }
        System.Drawing.Image thumb = fullSizeImg.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero);
        thumb.Save(Response.OutputStream, ImageFormat.Gif);
    }
    else
    {
        fullSizeImg.Save(Response.OutputStream, ImageFormat.Gif);
    }
}
