


An algorithm is a self-contained step-by-step set of operations to be performed. Algorithms exist that perform calculation, data processing, and automated reasoning. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.


Dear Experts,

I'm trying to analyze the machine learning logic behind the algorithms, for example KNeighborsClassifier():

from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier()
print(model)


Could you please advise whether it is possible to list the contents of an algorithm in Python, and with which syntax concretely? The above print(model) of course just provides some overall information about the algorithm, but not its contents.
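A sketch of what is possible: scikit-learn estimators are ordinary Python classes, so the standard library's inspect module can show their actual source, and get_params() lists the configured hyper-parameters. (Note that some of the numeric core lives in compiled Cython extensions, which getsource cannot display.)

```python
import inspect
from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier()

# The hyper-parameters the estimator was configured with
print(model.get_params())

# The actual Python source of the class -- the "contents" of the algorithm
# as implemented in scikit-learn
print(inspect.getsource(KNeighborsClassifier))

# The file on disk, if you prefer to read it in an editor
print(inspect.getsourcefile(KNeighborsClassifier))
```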

Thanks in advance,

A question about making puzzles for bridges (also known as Hashiwokakero or Hashi).  https://en.wikipedia.org/wiki/Hashiwokakero

Is there an algorithm to generate a puzzle given the size of the grid? Please provide the algorithm, or a link to one, if it exists.
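One common approach (a sketch, not a polished generator) is to build a solved layout first and publish only the clue numbers: start from one island and repeatedly extend a bridge of weight 1 or 2 to a new empty cell, never crossing an existing bridge. The resulting puzzle is connected and solvable by construction.

```python
import random

def generate_hashi(width, height, n_islands, seed=None):
    """Grow a solved Hashi layout; return {(x, y): clue} per island."""
    rng = random.Random(seed)
    first = (rng.randrange(width), rng.randrange(height))
    islands = {first: 0}        # (x, y) -> clue (total bridge weight touching it)
    covered = set()             # grid cells occupied by bridge segments

    attempts = 0
    while len(islands) < n_islands and attempts < 10_000:
        attempts += 1
        x, y = rng.choice(list(islands))
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        dist = rng.randrange(2, 5)                  # leave room for the bridge
        nx, ny = x + dx * dist, y + dy * dist
        path = [(x + dx * i, y + dy * i) for i in range(1, dist)]
        if not (0 <= nx < width and 0 <= ny < height):
            continue
        if (nx, ny) in islands or (nx, ny) in covered:
            continue                                # target cell already taken
        if any(c in islands or c in covered for c in path):
            continue                                # bridge would cross something
        weight = rng.choice([1, 2])                 # single or double bridge
        islands[(nx, ny)] = weight
        islands[(x, y)] += weight
        covered.update(path)
    return islands

puzzle = generate_hashi(13, 13, 15, seed=42)
for (x, y), clue in sorted(puzzle.items()):
    print(f"island at ({x:2d},{y:2d}) clue {clue}")
```

Caveat: solvability is guaranteed, but uniqueness of the solution is not; serious generators run a solver on each candidate and reject puzzles with multiple solutions.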
Issue: SSL Certificate Signed Using Weak Hashing Algorithm

An SSL certificate in the certificate chain has been signed using a weak hash algorithm. The remote service uses an SSL certificate chain that has been signed using a cryptographically weak hashing algorithm (e.g. MD2, MD4, MD5, or SHA1). These signature algorithms are known to be vulnerable to collision attacks. An attacker can exploit this to generate another certificate with the same digital signature, allowing an attacker to masquerade as the affected service.

Note that this plugin reports all SSL certificate chains signed with
SHA-1 that expire after January 1, 2017 as vulnerable. This is in
accordance with Google's gradual sunsetting of the SHA-1 cryptographic
hash algorithm.

Note that certificates in the chain that are contained in the Nessus
CA database (known_CA.inc) have been ignored.      

Contact the Certificate Authority to have the certificate reissued.      


Plugin output: The following certificates were part of the certificate chain sent by the remote host, but contain hashes that are considered to be weak.

|-Subject             : CN=XXX
|-Signature Algorithm : SHA-1 With RSA Encryption
|-Valid From          : Sep 30 12:06:43 2016 GMT
|-Valid To            : Sep 28 12:06:43 2026 GMT
The Original Data Sheet in the attached spreadsheet shows x, y coordinates and the corresponding true z values.
The Interpolation Sheet shows a different set of x, y coordinates (some overlapping), and the interpolated values imported from two different algorithms - bilinear and bicubic interpolation. There is also a column called z_truth.

My goal is to determine whether bilinear or bicubic interpolation is better.
It may be that for this initial data set that one sometimes does better than the other and vice versa.

To get a sense of how the interpolated data matched against the original data, I started copying rows from the Original Data Sheet to the Interpolation Sheet placing x, y coordinates from one near the other. If the coordinates match exactly, then there is only one copy. Otherwise, there are two copies where the original x, y coordinate falls between two x,y coordinates in interpolated sheet.

Currently, I am just going one row at a time from the Original Data Sheet and copying it to one or two locations in the Interpolation Sheet. This is taking too much time. I was wondering whether this copying could be automated.

If you could also help me with some metric to identify whether bilinear or bicubic interpolation is closer to the true values, I would appreciate that.

(I am now aware that my choice of original true-value coordinates may not be so great, but this is a (good?) first start.)
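For the metric question: the standard choices are root-mean-square error (RMSE) and mean absolute error (MAE) between each interpolated column and z_truth; the method with the lower error is "better" for this data set. A minimal sketch, with hypothetical sample values standing in for the spreadsheet columns:

```python
import math

def rmse(truth, estimate):
    """Root-mean-square error; penalizes large misses heavily."""
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(truth, estimate)) / len(truth))

def mae(truth, estimate):
    """Mean absolute error; less sensitive to a few large misses than RMSE."""
    return sum(abs(t - e) for t, e in zip(truth, estimate)) / len(truth)

# Hypothetical values standing in for z_truth and the two interpolated columns
z_truth   = [1.0, 2.0, 3.0, 4.0]
z_bilin   = [1.1, 1.9, 3.2, 3.9]
z_bicubic = [1.0, 2.1, 2.9, 4.1]

print("bilinear RMSE:", rmse(z_truth, z_bilin),   "MAE:", mae(z_truth, z_bilin))
print("bicubic  RMSE:", rmse(z_truth, z_bicubic), "MAE:", mae(z_truth, z_bicubic))
```

Comparing both metrics is useful: if RMSE and MAE disagree, one method is making a few large errors while the other makes many small ones.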

I have here what I think is a classical O.R. (operations research) problem. I'm looking to formulate it mathematically and look at the options available to solve it.

We have a list of say 100 recipes.

These recipes combined use 400 ingredients.

The quantity of ingredients required for all the recipes could be defined by a matrix dimension 100 x 400.

We can combine recipes into meal plans, so that the ingredients required for a meal plan is the total of the ingredients required for each recipe in the meal plan.

Each ingredient has an associated cost, that varies depending on the quantity bought.

Say the cost variations can be expressed with no more than 5 cost buckets per ingredient. Eg.

Bucket, Cost / kg
< 1kg, £4.00
1-5kg, £3.50
5-20kg, £3.00
20-100kg, £2.80
> 100kg, £2.50

The ingredient costs can be expressed in two matrices each of dimension 400 x 5:
a) One matrix 400 x 5 giving the buckets for each ingredient
b) Another matrix 400 x 5 giving the unit costs per ingredient and bucket number (1 to 5)

A given meal plan will have a given ingredient requirement, with associated total cost determined via the above matrices.

Suppose we constrain the number of meals in the meal plan to say 4 meals, and these 4 meals must be different recipes. For simplicity (!) we have no other constraints for now.

Our objective is to choose our meal plan (4 meals) such that the total cost is minimised.

1. How do we formulate this problem?
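Before a formal formulation, note that a brute-force baseline is feasible here: choosing 4 of 100 recipes is C(100, 4) ≈ 3.9 million plans, easily enumerable. A sketch with hypothetical toy data standing in for the 100 x 400 requirement matrix and the 400 x 5 bucket/cost matrices:

```python
from itertools import combinations

def ingredient_cost(qty, buckets, costs):
    """Piecewise unit pricing: charge the rate of the bucket qty falls into."""
    for upper, unit in zip(buckets, costs):
        if qty <= upper:
            return qty * unit
    return qty * costs[-1]                 # beyond the last bound: cheapest rate

def best_plan(recipes, buckets, costs, plan_size=4):
    """Brute-force search over all plans of plan_size distinct recipes."""
    best = (float("inf"), None)
    for plan in combinations(recipes, plan_size):
        need = {}                          # total ingredient requirement
        for r in plan:
            for ing, kg in recipes[r].items():
                need[ing] = need.get(ing, 0) + kg
        total = sum(ingredient_cost(kg, buckets[ing], costs[ing])
                    for ing, kg in need.items())
        best = min(best, (total, plan))
    return best

# Hypothetical toy data: 3 ingredients, 2 price buckets, plans of 2 recipes
buckets = {"flour": [1, 5], "rice": [1, 5], "oil": [1, 5]}
costs   = {"flour": [4.0, 3.5], "rice": [3.0, 2.5], "oil": [6.0, 5.0]}
recipes = {"a": {"flour": 2}, "b": {"rice": 3}, "c": {"flour": 1, "oil": 1}}
print(best_plan(recipes, buckets, costs, plan_size=2))
```

The standard O.R. formulation would be an integer program: binary variables selecting recipes, a constraint that exactly 4 are chosen, and the bucketed prices modeled as piecewise-linear cost functions (e.g. with SOS2 or extra binaries per bucket), solvable with an MILP solver.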
Watching this 6 minute video, I learned that HAARP transmits 72,000 times the maximum power allowed for an AM station in the United States.


My question is what's a better analogy?

For example, how much energy is that compared to the output of a small nuclear plant?

I am just having a hard time sensing the scope of the comparison they created and hope someone can come up with a better description.
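Assuming the video's figure refers to the 50 kW US limit for AM broadcast stations (an assumption on my part), the arithmetic works out to:

```python
am_max_watts = 50_000            # US AM broadcast power limit (assumption)
haarp_factor = 72_000            # multiplier quoted in the video
haarp_watts = am_max_watts * haarp_factor

print(haarp_watts / 1e9, "GW")   # 3.6 GW

# A large nuclear reactor produces roughly 1 GW of electrical power,
# so 3.6 GW is on the order of three to four large reactors
print(haarp_watts / 1e9, "reactor-equivalents at ~1 GW each")
```

One caution on the analogy: 3.6 GW matches HAARP's often-quoted effective radiated power, which includes the focusing gain of the antenna array; the transmitter's actual power is on the order of megawatts. So "several nuclear plants" describes the apparent beam intensity, not the electricity the facility draws.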

I have heard that blockchain databases are secure because of their use of "Byzantine fault tolerance."

I am told, Blockchain algorithms use encryption techniques to intertwine new data with existing data using this type of cryptography.

Please verify this and explain in more detail where the term "Byzantine fault tolerance" comes from and what it is exactly?

I would like to know how to compute the x-intercepts of a cubic graph based on a given equation. If there is a formula that does this, then I would like to know this formula; if not, then I would like to know the steps required to compute said values.
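There is an exact closed-form solution (Cardano's formula for the cubic), but it is cumbersome by hand; numerically, a common route is to find the roots of the polynomial and keep the real ones. A sketch using NumPy, which computes roots as eigenvalues of the companion matrix:

```python
import numpy as np

def cubic_x_intercepts(a, b, c, d, tol=1e-9):
    """Real roots of a*x^3 + b*x^2 + c*x + d = 0, i.e. the x-intercepts."""
    roots = np.roots([a, b, c, d])           # may include complex conjugates
    return sorted(r.real for r in roots if abs(r.imag) < tol)

# Example: x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(cubic_x_intercepts(1, -6, 11, -6))     # roots near 1, 2, 3
```

A cubic always has at least one real root (its graph runs from -inf to +inf), and either one or three real intercepts (counting multiplicity).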
What options are there to protect a web service from a DOS attack?

IF the web service were accessed only by my Objective-C iPhone application, and nowhere else, is this web service protected by the "security through obscurity" model? Or, can hackers crack open the source code of the iPhone app, like Apple can?

What about if I put the URL to the web service into the SQLite database and encrypted the Path?

So, when my app needs to request information from the web service, it does a DB lookup in the SQLite database for the path to the web service. When it gets it, it decrypts it. Then, using a variable (in memory) only, it makes the web service call.

Does this protect from a DOS attack to that web service call?

Are there easier ways?

Will this work on Java for the Android?

What about on my website?

What technologies are best suited for the highest performance web services to handle the heaviest loads? Thousands (even millions) of transactions per second?

I know that Node.js can handle thousands of sessions on a single thread. And I do not know of any other technology which handles more than one session per thread. Am I right about that? Are there others that can do this also?

What about real-time programming as it relates to web-services. Is there such a thing?

And how about a few general words about hardware deployment? Having a central API call which distributes calls to hundreds of more specialized web servers? What about Caching for successive related calls?

And what about Machine Learning? Can algorithms be optimized by existing Machine Learning algorithms that reduce the average response times of the most heavily loaded web service?


What should I expect by forking the Bitcoin open source code? What kinds of things can one do in customizing the code?

What language(s) is it in? What platform does it run on?

What about the code run by bookkeepers? And the generation of the blockchain database updates?

And lastly, what "engine" determines which algorithms need to be mined and what is managing the connections to this peer-to-peer network?

X = [[1, 2],
     [3, 4]]

y = [[3, 1],
     [4, 2]]

How can I convert X to y?
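Reading the two matrices as written, y is X rotated 90 degrees clockwise. A sketch of the conversion:

```python
import numpy as np

X = np.array([[1, 2],
              [3, 4]])

# np.rot90 rotates counter-clockwise by default, so k=-1 gives one
# clockwise quarter-turn
y = np.rot90(X, k=-1)
print(y)        # [[3 1]
                #  [4 2]]

# Equivalent without numpy: transpose, then reverse each row
rows = [[1, 2], [3, 4]]
rotated = [list(r)[::-1] for r in zip(*rows)]
print(rotated)  # [[3, 1], [4, 2]]
```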
I need to create a language agnostic hashing algorithm for a custom PHP-based application that I'm developing.  

The reason it needs to be language agnostic is because the PHP application that I'm developing needs to be able to communicate with another 3rd party application (hosted on Heroku) .. and both applications need to be able to apply the same exact algorithm.

I understand that it's considered bad practice to use a hardcoded SALT value when applying hashing algorithms, but in this particular scenario, I suspect that it may be unavoidable.  I'm all ears, however.  Still .. let me explain what it is that I need to do first.

Here's what I currently have set up in my PHP application:

$param1 = 'beta';
$param2 = 'noodle';
$param3 = '1502719494';
$hashstring = $param1 . $param2 . '-' . $param3;
$options = [
  'cost' => 10,
  'salt' => 'Dk2jdfPIJFddf32948jdfg809fiejf',
];
echo password_hash($hashstring, PASSWORD_BCRYPT, $options);


What I'd like to do is to somehow re-write this in a way where it could be interpreted universally in pretty much any programming language, .. but where they can both use the same hardcoded SALT value, and both return the exact same result.  The Heroku application is apparently a Node.js powered application (written in Google V8 JavaScript) .. if that information helps any.  

Anyways, .. I'd be interested to hear anyone's thoughts regarding what I'm trying to accomplish here.

- Yvan
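One note before any code: PHP 7 deprecated the salt option of password_hash precisely because fixed bcrypt salts are unsafe, and bcrypt output is awkward to reproduce byte-for-byte across languages. If the real goal is a deterministic keyed hash that two applications can independently compute and compare, the standard tool is HMAC-SHA256 with a shared secret, available everywhere: hash_hmac('sha256', $msg, $key) in PHP and crypto.createHmac('sha256', key) in Node.js produce identical hex output. A sketch of the algorithm (shown in Python for neutrality; the parameter values are from the question):

```python
import hmac, hashlib

SECRET = b"Dk2jdfPIJFddf32948jdfg809fiejf"   # shared key, playing the "salt" role

def sign(param1, param2, param3):
    """Deterministic keyed hash both applications can reproduce exactly.

    PHP:  hash_hmac('sha256', $msg, $key)
    Node: crypto.createHmac('sha256', key).update(msg).digest('hex')
    yield this same hex string for the same message and key.
    """
    msg = f"{param1}{param2}-{param3}".encode("utf-8")
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

print(sign("beta", "noodle", "1502719494"))
```

The usual caveats apply: the shared secret should live in configuration or an environment variable rather than in source, and if these are user passwords rather than API messages, a proper per-password random salt with bcrypt remains the right design.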
I have the Excel file below with 3 columns.
On one row I have a pair of q, k corresponding to a certain x.
In the sheet are different values for x.
I need to find what pairs of q, k are common for all values of x.
What is a fast and easy way to do that?
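This is a set-intersection problem: group the (q, k) pairs by x, then intersect the per-x sets. A sketch with hypothetical rows standing in for the three spreadsheet columns (with the real file, the rows could be read via pandas.read_excel or openpyxl):

```python
# Hypothetical (q, k, x) rows standing in for the spreadsheet
rows = [
    (1, 2, 10), (3, 4, 10), (5, 6, 10),
    (1, 2, 20), (5, 6, 20),
    (1, 2, 30), (3, 4, 30), (5, 6, 30),
]

# Group the (q, k) pairs by x value
by_x = {}
for q, k, x in rows:
    by_x.setdefault(x, set()).add((q, k))

# Pairs present for every value of x
common = set.intersection(*by_x.values())
print(common)       # {(1, 2), (5, 6)}
```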
What is the best and easiest approach, method, software to solve a system like the one below?


Could you help to find a set of solutions? I guess are more than one.
This is not homework, but rather one of those engineering approximation problems that ended up as a set of unknown variables.

Basically the function is.
with p, q, k, r, s unknown.
And next known set of solutions with 1 digit approximation allowed for S5x:
x - S5x
1 - 0
2 - 1
4 - 2
8 - 3
16 - 6
32 - 13
64 - 25
128 - 50
256 - 101
512 - 201
1024 - 401
2048 - 799
4096 - 1567
8192 - 2896
16384 - 4096

I have an audio file, many actually, that are an interview between the interviewer and interviewee.  The same person is asking questions in each file, while the people answering are different.

I need to separate the answers out by generating silence over the interview questions. I'm currently doing this by hand with audacity, but it is extremely time consuming.

Any help would be greatly appreciated.  I am a software developer, but audio is not my area, so code is an option if there isn't a program available.

I am comparing my algorithm to another algorithm. My algorithm has a complexity of O(n), whereas the other algorithm is O(n²).

I then measured the running time by implementing the algorithms.

Should the running time of the other algorithm be the square of the running time of my algorithm?

For example, if my algorithm's running time is 3 ms, should the other algorithm's running time be 9 ms?
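No: Big-O describes how each algorithm's time grows with its own input size, not a relationship between the two measured times, and each algorithm has its own constant factor. The testable prediction is: doubling n should roughly double the O(n) time and roughly quadruple the O(n²) time. A sketch counting basic operations instead of wall-clock time (which is noisy):

```python
def linear_ops(n):
    """One pass over the data: O(n) basic operations."""
    return sum(1 for _ in range(n))

def quadratic_ops(n):
    """All ordered pairs: O(n^2) basic operations."""
    return sum(1 for _ in range(n) for _ in range(n))

for n in (1000, 2000):
    print(n, linear_ops(n), quadratic_ops(n))

# Doubling n doubles the O(n) count but quadruples the O(n^2) count.
# At any single n, the two algorithms' times are unrelated: a 3 ms O(n)
# run tells you nothing about the O(n^2) algorithm's absolute time.
```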
I got a requirement from my client: he wishes to encrypt/password-protect some of the documents before sending them over to another user in another regional office, without writing the password into the email body.

How can the user in the other regional office know the password to decrypt/open the protected documents?

What approach should be adopted? Is there any standard/algorithm I can refer to? We prefer not to use any 3rd-party software/utility.

Thank you.

Thank you.
The array has a series of integers. I need to find all pairs in the array using a hash table.
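Assuming "pairs" means pairs that sum to a given target (the classic form of this exercise; the question doesn't say), a hash table maps each value to the indices where it was seen, so each element is checked against its complement in O(1) average time, giving O(n) overall instead of the O(n²) double loop:

```python
def pairs_with_sum(nums, target):
    """All index pairs (i, j) with i < j and nums[i] + nums[j] == target."""
    seen = {}            # value -> list of indices where it has occurred
    pairs = []
    for j, v in enumerate(nums):
        for i in seen.get(target - v, []):   # complements seen earlier
            pairs.append((i, j))
        seen.setdefault(v, []).append(j)
    return pairs

print(pairs_with_sum([1, 4, 3, 2, 5, 0], 5))   # [(0, 1), (2, 3), (4, 5)]
```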

The problem: I want to display a row of rectangles placed shoulder to shoulder, on a grid page (think graph paper). The width of the page where the rectangles appear will be an unknown integer variable (but always greater than 100).  The height of the rectangles will be a constant integer value (e.g., 20 units high). I want to fill a row on this page with as many rectangles as possible where the rectangle width will be equal or greater than the predefined rectangle height and where there will be no gap/margin at the end of the row.

I realize that this is only possible when the page width (e.g. 200)  can be evenly divided by the rectangle height (e.g., 200 / 20).  If the page width is 201, it is impossible to have rectangles all with the same width and at the same time have no gap/margin at the end of the row of rectangles. But if the page width is 210, I can fit 10 rectangles which are 21 units wide. That's fine. I don't mind stretching the rectangles a bit, as long as the rectangle width is never less than the predefined rectangle height constant (in this example, is 20). So, if the page width is 220, that works fine for I can get 10 rectangles of 22 units across the page with no gap/margin at the end of the row. I don't mind the rectangle width growing a bit so as to ensure no right margin/gap results. But there would be a limit to the rectangle width changing from its original desired width (which is equal to the rectangle height constant).  So I would need the …
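As described, the count that maximizes rectangles while keeping width >= height is floor(page_width / rect_height), with the remainder absorbed by stretching. A minimal sketch of that rule:

```python
def fit_row(page_width, rect_height=20):
    """Most rectangles per row with width >= rect_height and no end gap.

    The largest count whose stretched width is still >= rect_height is
    floor(page_width / rect_height); dividing the page evenly among that
    many rectangles absorbs any remainder.
    """
    count = page_width // rect_height      # e.g. 210 // 20 -> 10
    width = page_width / count             # e.g. 210 / 10 -> 21.0
    return count, width

print(fit_row(200))   # (10, 20.0)
print(fit_row(210))   # (10, 21.0)
print(fit_row(201))   # (10, 20.1)
```

If the drawing API requires integer widths, the remainder page_width mod count can instead be absorbed by making that many rectangles one unit wider, e.g. 201 = 9 × 20 + 1 × 21.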
I have an array that will have approx 3000 elements. Within each element is an array that contains two key:value pairs. I want to sort the elements of the outer array based on the value of the first key:value pairs within each element.

array = [{"key1":"group9","key2":["20170222","20170531"]},{"key1":"group3","key2":["20170221"]},{"key1":"group7","key2":["20170321"]}, ..................]

so sorting gives e.g. (the elements are arranged based on sorting the groupX value of key key1 i.e. the first key:value pair)

array = [{"key1":"group3","key2":["20170221"]} ,{"key1":"group7","key2":["20170321"]} ,{"key1":"group9","key2":["20170222","20170531"]},................................]

I've implemented a merge sort and it works fine. But I want to look at what is the most efficient algorithm to use, as my colleagues have suggested bubble sort (I thought this was one of the worst algorithms, but it really depends on how large the dataset is).

Can anyone advise?

I'll (learn to) implement this on a webpage so I'm using Javascript/JQuery/ReactJS
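Your instinct is right: bubble sort is O(n²) and among the worst choices at 3000 elements, while merge sort is already O(n log n). In practice, though, the built-in sort wins: both modern JavaScript engines and Python use Timsort, an O(n log n) stable sort heavily optimized in native code. A quick Python sketch of the same comparison (the JavaScript equivalent is array.sort((a, b) => a.key1.localeCompare(b.key1))):

```python
data = [
    {"key1": "group9", "key2": ["20170222", "20170531"]},
    {"key1": "group3", "key2": ["20170221"]},
    {"key1": "group7", "key2": ["20170321"]},
]

# Built-in Timsort with a key function: O(n log n), stable, native-speed
data.sort(key=lambda d: d["key1"])
print([d["key1"] for d in data])   # ['group3', 'group7', 'group9']
```

One caveat: lexicographic order puts "group10" before "group9"; if the numeric suffix matters, sort on the parsed number instead, e.g. key=lambda d: int(d["key1"][5:]).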
I know that a circle has 360 degrees and Pi has a very specific and known value.

The question is, must a circle have 360 degrees? Or was that a decision that was made so each quarter would have 90 degrees?

So, was Pi "backed into"?

If Euclid had wanted a circle to be 400 degrees, could he have simply used a different formula to generate Pi?


And what is that formula to generate Pi, if you have it.
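Short answer: 360 degrees is a convention (usually traced to Babylonian base-60 arithmetic), while Pi is defined by geometry as circumference divided by diameter, so it does not depend on how the circle is subdivided; a 400-unit circle exists already (gradians) and leaves Pi unchanged. As for formulas that generate Pi, here are two classics:

```python
from math import atan, pi

# Machin's 1706 formula: converges quickly
machin = 16 * atan(1 / 5) - 4 * atan(1 / 239)
print(machin)                    # 3.141592653589793

# Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... : simple but very slow
leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(1_000_000))
print(leibniz)                   # accurate to about 6 decimal places
```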

I need some help with this code. It works for some Arrays but not for all. Do you see a mistake in the code?

These are the failures of 13 different tests:
  There were 3 failures:
  1) test3(ads.set2.knapsack.test.EfficientKnapsackTest)
  java.lang.AssertionError: Checking optimal profit:  expected:<3144> but was:<2660>
  ?at ads.set2.knapsack.test.EfficientKnapsackTest.performTests(EfficientKnapsackTest.java:157)
  ?at ads.set2.knapsack.test.EfficientKnapsackTest.test3(EfficientKnapsackTest.java:69)
  2) test4(ads.set2.knapsack.test.EfficientKnapsackTest)
  java.lang.AssertionError: Checking optimal profit:  expected:<76> but was:<72>
  ?at ads.set2.knapsack.test.EfficientKnapsackTest.performTests(EfficientKnapsackTest.java:157)
  ?at ads.set2.knapsack.test.EfficientKnapsackTest.test4(EfficientKnapsackTest.java:74)
  3) test8(ads.set2.knapsack.test.EfficientKnapsackTest)
  org.junit.runners.model.TestTimedOutException: test timed out after 300 milliseconds

  Tests run: 13,  Failures: 3

public class KnapsackSolver {

	/**
	 * Calculates the maximum profit for the knapsack problem using
	 * an improved dynamic programming algorithm.
	 * @param items the items available to be packed into the knapsack.
	 * @param capacity the maximum weight allowed for the knapsack.
	 * @return the maximum profit possible for the given weight
	 */
	public static int pack(final Item[] items, final int capacity) {
		// 0/1 knapsack on a single capacity row; iterating weights downwards
		// packs each item at most once (Item assumed to expose weight/profit)
		final int[] best = new int[capacity + 1];
		for (final Item item : items) {
			for (int w = capacity; w >= item.weight; w--) {
				best[w] = Math.max(best[w], best[w - item.weight] + item.profit);
			}
		}
		return best[capacity];
	}
}

I have used Python, Natural Language Processing and web scraping technology to do some deep mining of Google's vast warehouses of articles on the web.

Is what I described considered artificial intelligence?

Or, in order to satisfy the requirement of AI, must the model also automatically adjust its algorithms based on the results of the searches? In my case, the adjustments were manual.


I have a trading system that is about 40% right: 4 out of 10 signals generated by it yield a profit and the remaining 6 result in a loss. Can I use neural networks/AI/deep learning to improve the chances? Can neural networks be used, in this case, to filter out the bad trades in advance, such that the trading system produces 60% good trades?
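In principle yes: this is a binary classification problem, training a model on features describing each historical signal, labeled winner or loser, and then only taking new signals whose predicted win probability clears a threshold. Whether it reaches 60% depends entirely on whether your features carry real predictive information. A minimal sketch with synthetic stand-in data (the features and labels here are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))       # stand-ins for e.g. momentum, volatility, volume
# Synthetic labels loosely tied to the first feature, ~balanced win rate
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
taken = proba > 0.6               # only trade high-confidence signals
if taken.any():
    print("base win rate:    ", y_te.mean())
    print("filtered win rate:", y_te[taken].mean())
```

Two cautions from practice: evaluate on out-of-sample (ideally walk-forward) data, since financial models overfit easily, and note that raising the threshold trades fewer signals for a higher win rate, which may or may not improve total profit.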

