I have a simple math algorithm. All it does is take an input n and find all pairs i, j such that i^2 + j^2 == n, with the restriction that j >= i (so that it doesn't also print each pair's counterpart, e.g., 2^2 + 3^2 == 3^2 + 2^2, but I only need the former since j >= i).
For my code, I did the following: I have two for loops, the first for i and the second for j. For each pair of i and j values, it tests whether i^2 + j^2 == input and j >= i; if so, it prints the pair and updates the count.
The problem is that, for large input values, it takes a very long time, since the loops run from 1 to 2000 for i and then from 1 to 2000 again for every i.
def some_mathfn(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i**2 + j**2 == n and j >= i:
                print(i, '^2 + ', j, '^2')
                count += 1
    return count

some_mathfn(2001)
You've got an O(n^2) algorithm for no obvious reason. It's easy to make this O(n^(1/2))...

First, the outer loop only needs to go up to sqrt(n/2) (for variable i) - because when i is greater than sqrt(n/2), i*i + j*j will be greater than n for any j greater than or equal to i.

Second, there's no need for an inner loop over j at all: once i is fixed, so is n - i*i, so i determines the only candidate j - the integer square root of n - i*i.
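Putting those two observations together, a sketch of the improved approach (keeping the original function name and output format) might look like this:

```python
import math

def some_mathfn(n):
    count = 0
    # i only needs to run up to sqrt(n/2): beyond that point,
    # i*i + j*j > n for any j >= i
    top = int(math.sqrt(n / 2)) + 1
    for i in range(1, top):
        j_sqr = n - i * i
        j = int(math.sqrt(j_sqr))
        # j is a valid partner only if n - i*i is a perfect square;
        # j >= i holds automatically because i <= sqrt(n/2)
        if j * j == j_sqr:
            print(i, '^2 + ', j, '^2')
            count += 1
    return count
```

For example, `some_mathfn(50)` finds the two pairs 1^2 + 7^2 and 5^2 + 5^2, doing only a handful of iterations instead of 2500.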
Taking the square root of n - i*i, truncating it to an integer, and squaring it again is effectively just checking that the square root of n - i*i is actually an integer. For very large values of n, though, finding the nearest integer and then checking the condition exactly is the more reliable approach, to avoid floating-point limitations causing issues: the nearest-representable double to the theoretical square root could be an integer, despite the actual result not being an integer. This would only happen for really large values of n, but it's worth bearing in mind.
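One way to make the perfect-square check exact in Python 3.8+ is math.isqrt, which computes the integer square root on arbitrary-precision ints and never goes through floating point (the helper name here is my own):

```python
import math

def is_perfect_square(x):
    # math.isqrt computes the exact integer square root, so there is no
    # floating-point rounding to worry about, even for huge x
    if x < 0:
        return False
    r = math.isqrt(x)
    return r * r == x

# A value far beyond the range where a double can represent every integer:
big = (10**15 + 1) ** 2
print(is_perfect_square(big))       # True: a perfect square by construction
print(is_perfect_square(big + 1))   # False: consecutive squares differ by more than 1
```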
See more on this question at Stack Overflow