

Python

Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in many other languages. Python supports multiple programming paradigms, including object-oriented, imperative, functional, and procedural styles. It features a dynamic type system and automatic memory management, a large and comprehensive standard library, and a rich ecosystem of third-party packages such as NumPy, SciPy, and Django.


Here is a quick entry-level class problem in Python. Please take a look.
When a car is rented out, I thought it would only delete the item from self.availableCars.
Why was the item also deleted from self.OriginalInventory?

For example, if we first rent a "Sedan" out and then return it, the price of "Sedan" is shown as $None after we return it.

Please help.

Thanks,
RDB



class carRental:
    def __init__(self, listOfCars):
        self.OriginalInventory = listOfCars #make a copy of Original Inventory, so we can look up price when a car is returned.
        self.availableCars = listOfCars

    def displayCars(self):
        print()
        print("Here is a list of available cars for rent")
        print("**************************")
        for car in self.availableCars.keys():
            print("{}:${} per day".format(car,self.availableCars.get(car)))
        print("**************************")
        print()

    def rentOut(self, requestedCar):
        if requestedCar in self.availableCars.keys():
            print("You have requested {}, it is available. It will cost you ${} per day".format(requestedCar, self.availableCars.get(requestedCar)))
            print()
            del self.availableCars[requestedCar]

            print("here is the list of available cars")
            for car in self.availableCars.keys():
                print("{}:${} per day".format(car,self.availableCars.get(car)))

            print("here is the list of car in the original 

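A likely explanation, sketched below: both attributes are assigned the same dictionary object, so deleting a key through self.availableCars also removes it from self.OriginalInventory. Taking a copy at construction time (a minimal sketch, assuming listOfCars is a dict of car name to price) keeps the original inventory intact:

# Both names point at the same dict unless you copy it explicitly.
class carRental:
    def __init__(self, listOfCars):
        self.OriginalInventory = dict(listOfCars)  # independent copy for price lookups
        self.availableCars = dict(listOfCars)      # working copy that rentOut() mutates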


I would like to know if it's possible to get notified by Active Directory when a user has been added to or removed from a group.

Is there a way in Python to "subscribe" so that our program is notified when this kind of event occurs?

The only way I can currently detect those changes is to poll the data periodically and compare group memberships.

(running Python 3.7 on Windows Server 2012 R2)

So far I've found the pyad and python-ldap modules, but neither of them seems to have functions for that.

Note: if being notified isn't possible, is there a way to check with a single call "Is there any change?" If something has changed, I could then read the whole group to see what it was.

Thanks
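One hedged option for the "single call" check, rather than a push notification: Active Directory bumps an object's uSNChanged attribute whenever that object is modified, so a cheap base-scope read of the group with python-ldap tells you whether the member list is worth re-reading. Server, credentials and the group DN below are placeholders:

import ldap

GROUP_DN = 'CN=MyGroup,OU=Groups,DC=example,DC=local'      # hypothetical group DN

def group_usn(conn, dn):
    # one base-scope read of the group's uSNChanged attribute
    result = conn.search_s(dn, ldap.SCOPE_BASE, '(objectClass=*)', ['uSNChanged'])
    _, attrs = result[0]
    return int(attrs['uSNChanged'][0])

conn = ldap.initialize('ldap://dc01.example.local')        # hypothetical DC
conn.simple_bind_s('EXAMPLE\\svc_account', 'secret')

last_usn = group_usn(conn, GROUP_DN)
# ...later, a single call answers "has anything changed?"
if group_usn(conn, GROUP_DN) != last_usn:
    print('Group changed - re-read the member attribute now')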
Hi Apple folks,

I prefer using Xcode for Python programming, but I notice it has no native (or full-featured) support for debugging Python. Or have I missed something in Xcode?

Basically, how can I get full-featured debugging for Python code in Xcode?

Thanks,
bbao
Using:
  • Microsoft Visual Studio Code 1.28.2 x64
  • Python 3.7

I have the smallest piece of code ever:
import ldap
print (ldap.__version__)



And I'm getting the error:
  • Exception has occurred: ModuleNotFoundError
  • "No module named 'ldap'"

I've installed the wheel found on Christoph Gohlke's web site, using this command:
pip install .\python_ldap-3.1.0-cp37-cp37m-win32.whl

I've checked the dependencies as shown here:
Capture.PNG
What am I missing?

Because when I run the following command in a terminal, I get "3.1.0", which means that interpreter can find the ldap module:
python -c "import ldap;print (ldap.__version__)"

Thanks a lot for your help
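A common cause (an assumption here, not a certainty): VS Code is launching a different Python interpreter than the terminal where the wheel was installed. A quick way to compare is to print the interpreter path from both places and, if they differ, point VS Code's interpreter picker at the one that has python-ldap:

import sys
print(sys.executable)   # run this from VS Code and from the terminal, then compare the paths
print(sys.version)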
I need a script written to take a large XML file and extract information into a CSV or directly into an MS SQL database, preferably in PowerShell.

Attached is a 'sample' of the XML file. The actual file is ~800 MB. I have a Python script that plods through smaller files (~14 MB), but it would take an enormous amount of time on the large file. Even a script to preprocess the file and drop all the line items with a ServicePointChannelID in the 400 range would help; then my Python script would work fine.

Attached are a sample XML file, a sample output file, and the Python script that creates it.

Ideally it would be nice to move this from Python over to PowerShell.
test2.xml
dataexport.csv
dataexport.py
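If staying in Python is acceptable, a hedged sketch of the preprocessing idea: stream the 800 MB file with xml.etree.ElementTree.iterparse instead of loading it whole, and drop every record whose ServicePointChannelID falls in the 400 range. The tag names ('Record', 'ServicePointChannelID') are placeholders; adjust them to match test2.xml:

import xml.etree.ElementTree as ET

def filtered_records(path):
    for event, elem in ET.iterparse(path, events=('end',)):
        if elem.tag == 'Record':                               # hypothetical record element
            channel = (elem.findtext('ServicePointChannelID') or '').strip()
            in_400s = channel.isdigit() and 400 <= int(channel) < 500
            if not in_400s:
                yield {child.tag: child.text for child in elem}
            elem.clear()                                       # free memory as we go

for row in filtered_records('test2.xml'):
    pass  # hand row to csv.DictWriter or an INSERT statement here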
Hi,
I'm trying to create a pandas DataFrame from some JSON, which contains a series of arrays. I've tried the code below, but I get an empty DataFrame.
The expected output is:

id               name
_________________
2546558   A1
2156478   A2
3654785   A3
1236489   A4
7896324   B1
1597532   B2
9512347   B3
7536972   B4

import pandas as pd
import json

data = [  
   [  
      {  
         'id':2546558,
         'name':'A1',
      },
      {  
         'id':2156478,
         'name':'A2',
      },
      {  
         'id':3654785,
         'name':'A3',
      },
      {  
         'id':1236489,
         'name':'A4',
      }
   ]
    ,
   [  
      {  
         'id':7896324,
         'name':'B1',
      },
      {  
         'id':1597532,
         'name':'B2',
      },
      {  
         'id':9512347,
         'name':'B3',
      },
      {  
         'id':7536972,
         'name':'B4',
      }
   ]
]
   
val = json.loads(json.dumps(data))

val = pd.DataFrame(val)

def getreadings(dict):
    d = pd.DataFrame()
    d['id'] = dict['id']
    d['name'] = dict['name']
    return d

df = pd.DataFrame()
for i in range(len(val)):
    df1 = getreadings(val.iloc[i,1])
    df = df.append(df1)
    
print(df)

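One likely fix, sketched under the assumption that the data really is just a list of lists of records: flatten it one level and let pandas build the columns straight from the dicts, rather than iterating over a DataFrame of lists:

import pandas as pd

flat = [record for group in data for record in group]
df = pd.DataFrame(flat)       # columns: id, name
print(df)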

Hi
We talked a little about my approach to teaching neighborhood kids (and even adults) how to code.
I think we decided against going all out on Python right away. Do you still agree? I'd like to hold off on Python for a pure newbie.
K-12 focuses on Java, so I think Java is the obvious choice for them.
I also think so because polymorphism and class design will be important for building tablet and phone apps, which are seriously non-novice projects. Java will make for an easy transition.
I'm going to start them off on Java and, when they are clearly ready, switch to Python or their preferred platform.
Could I do Java and Python side by side?
Any suggestions?
Thanks
I'll start early next year.

My padawan coders will excel.
My computer operating system is Windows 7 Pro.

I am trying, unsuccessfully, to find code on GitHub. I need help using the search feature. Yes, I searched Experts Exchange first before posting this message. I guess I just don't know what keywords to enter.

I have to write a program to switch "last name, first name" to "first name, last name". I am not a coding expert. I was hoping to find some VBScript code that would perform the task. I have to deal with all kinds of names. In the data I have to convert there are names with three elements, like:

Joe Jones Hopkins

Then there are people who have just one name.
And there are people with suffixes like Jr and III, for example "Joe Jones Jr".

Please explain in detail how you would use GitHub search to find suitable code to do the above. I would prefer VBScript, but Python code would be suitable as well.

Or maybe there is an Experts Exchange article that would help.

Thank you for your help.
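Since Python is acceptable, here is a rough sketch of the swapping logic itself (VBScript would follow the same shape): split on the first comma and reverse the pieces, leaving comma-free names untouched. It deliberately assumes suffixes like Jr or III stay attached to the last-name part; real data will need more rules than this:

def swap_name(name):
    if ',' not in name:
        return name.strip()                    # single names and already-plain names
    last, first = name.split(',', 1)
    return '{} {}'.format(first.strip(), last.strip())

print(swap_name('Hopkins, Joe Jones'))         # Joe Jones Hopkins
print(swap_name('Jones Jr, Joe'))              # Joe Jones Jr
print(swap_name('Madonna'))                    # Madonna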
I would like to analyse/describe a large dataset of over 300,000,000 rows. Can Python help me do this, and how efficiently can I do it?
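Python can handle that scale, but 300,000,000 rows usually will not fit comfortably in memory, so the usual approach is to stream the data. A minimal sketch, assuming the data lives in a CSV file (the filename is a placeholder): read it in chunks with pandas and accumulate summary statistics as you go; libraries such as Dask take the same idea further by parallelising it.

import pandas as pd

total_rows = 0
running_sum = None
for chunk in pd.read_csv('big_data.csv', chunksize=1000000):     # hypothetical file
    total_rows += len(chunk)
    s = chunk.select_dtypes('number').sum()
    running_sum = s if running_sum is None else running_sum + s

print(total_rows)
print(running_sum / total_rows)     # per-column means over the full dataset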
Hi,
I have some json data that I'm converting to a pandas DataFrame. I'd like to make the current index column the 'code' column and reset the index, so the output would be:

   code  parameters             units
1     1  Temperature            C
2    10  Specific Conductivity  NaN
3    11  Resistivity            NaN
4   113  NaN                    ppm
5   114  NaN                    ppt
6   117  NaN                    mg/L


This is my example python code:

import pandas as pd
import json
import requests

data = {  
   'parameters':{  
      '22':'NO₃⁻',
      '23':'NH₄⁺',
      '24':'Cl⁻',
      '25':'Turbidity',
      '26':'Battery Voltage',
      '49':'Velocity',
      '28':'Flow Rate',
      '29':'Total Flow (volume)',
      'flowVelocity':'Flow Velocity',
      '50':'Chl-a Concentration',
      '51':'Chl-a Fluorescence',
      '30':'Partial Pressure O₂',
      '31':'Total Suspended Solids',
      '54':'BGA-PC Concentration',
      '32':'External Voltage',
      '10':'Specific Conductivity',
      '55':'BGA-PC Fluorescence',
      '33':'Battery Level',
      '11':'Resistivity',
      '34':'RWT Concentration',
      '12':'Salinity',
      '35':'RWT Fluorescence',
      '13':'Total Dissolved Solids',
      '58':'BGA-PE Concentration',
      '36':'Cl⁻ mV',
      '14':'Density',
      '59':'BGA-PE Fluorescence',
      'density':'Density',
      '37':'NO₃⁻ as N',
      '16':'Baro',
      '38':'NO₃⁻ mV',
      '39':'NH₄⁺ 

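A minimal sketch of the index-to-column step, assuming df is the DataFrame built from this JSON with the codes currently sitting in the index: name the index 'code' and reset it, which moves it out into a regular column and gives a fresh default index.

df = df.rename_axis('code').reset_index()
# equivalently: df.index.name = 'code'; df = df.reset_index()
print(df.head())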


Dear Experts

I am new to Python and MySQL, and as part of my training I am creating a database listing all the files across all my NAS devices. I have tables TPaths and TFiles; TPaths contains PathName and TFiles contains FileName. PathName is a foreign key for TFiles, which links the path and file name together.

This is clear.

My question is maybe very simple: when I am walking across files and folders, for each file found I need to check whether its path is already listed in the TPaths table. What is the quickest way to do that? Should I index the PathName field? Should I search it with full text? Should I search it with a LIKE clause?

Many thanks for your answer

Vladimir
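A hedged suggestion: an exact-match lookup (WHERE PathName = %s) against an indexed, ideally UNIQUE, PathName column is the quick way; LIKE and full-text search are for pattern matching and would only slow this down. A small pymysql-flavoured sketch, with the table and column names taken from the question and the index statement as an assumption about your schema:

# e.g. run once:  CREATE UNIQUE INDEX idx_pathname ON TPaths (PathName(255));
def path_exists(cur, pathname):
    cur.execute("SELECT 1 FROM TPaths WHERE PathName = %s LIMIT 1", (pathname,))
    return cur.fetchone() is not None

# or let MySQL do "insert only if missing" in one round trip:
# cur.execute("INSERT IGNORE INTO TPaths (PathName) VALUES (%s)", (pathname,))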
Hi, I have a Python script that is creating a DataFrame from some JSON data.
I can create a DataFrame (df) from the data, but I also need to create a DataFrame from the 'readings' column within df. My code is failing because the 'readings' column is a list.
Ultimately I need to create a DataFrame with the two DataFrames combined:

DataFrame needed
This is the Python code I'm working with:

import pandas as pd
import json

data = {  
   'locationId':123546987,
   'parameters':[  
      {  
         'parameterId':'11',
         'unitId':'81',
         'customParameter':False,
         'readings':[  
            {  
               'timestamp':1538957700,
               'value':2306.078
            },
            {  
               'timestamp':1538959500,
               'value':2305.892
            },
            {  
               'timestamp':1538961300,
               'value':2305.981
            }
         ]
      },
      {  
         'parameterId':'1',
         'unitId':'1',
         'customParameter':False,
         'readings':[  
            {  
               'timestamp':1538957700,
               'value':25.575
            },
            {  
               'timestamp':1538959500,
               'value':25.572
            },
            {  
               'timestamp':1538961300,
               'value':25.575
            }
         ]
      }
   ]
}
         
val = json.loads(json.dumps(data))

val1 = val['parameters']

#val2 = [{'timestamp': 

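One way to finish this, sketched with pandas' json_normalize and continuing from your val variable: it expands the nested 'readings' lists into rows while carrying parameterId/unitId along on each row (on pandas versions before 1.0, import it as from pandas.io.json import json_normalize):

readings = pd.json_normalize(
    val['parameters'],
    record_path='readings',
    meta=['parameterId', 'unitId', 'customParameter'],
)
readings['locationId'] = val['locationId']
print(readings.head())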

Greetings everyone,
I am very new and green at Python scripting and need some guidance.
I have a script that reads temperature data from three different sensors.
It is currently set to print that data to the terminal screen.
The script works OK, but the data gets printed with an overabundance of decimal places. I have come across a way to round that data to one or two decimal places, which is what I want.
Unfortunately I can't seem to get the syntax right, and it doesn't work the way I need.

Here are the print commands currently:

# Get Temperature and print to screen
while True:
    print ("Pool Temp = ", read_temp(temp_sensor_1))
    print("Air Temp = ", read_temp(temp_sensor_2))
    print("Box Temp = ", read_temp(temp_sensor_3))
    time.sleep(2)   # Read every 2 seconds



That returns the following output:

Pool Temp = 51.0166
Air Temp = 51.355
Box Temp = 64.557



Waits 2 seconds and repeats until I kill the script.

What I am looking for is something like this:

Pool Temp = 51.1°F
Air Temp = 51.2°F
Box Temp = 64.6°F



I found a reference to using the str.format() syntax but have not been able to make it work correctly.
Any help would be greatly appreciated.

:)

-Bob
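The {:.1f} format specification inside str.format does the rounding to one decimal place. A minimal sketch of the print loop, assuming read_temp() and the sensor handles come from your existing script and read_temp() returns a number:

while True:
    print("Pool Temp = {:.1f}°F".format(read_temp(temp_sensor_1)))
    print("Air Temp  = {:.1f}°F".format(read_temp(temp_sensor_2)))
    print("Box Temp  = {:.1f}°F".format(read_temp(temp_sensor_3)))
    time.sleep(2)   # read every 2 seconds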
Hi,

Using Python, I'm trying to read a JSON string and output it to a DataFrame, but I get the following error:

TypeError: 'float' object is not subscriptable

code is here:
# We import the requests module which allows us to make the API call
import pandas as pd
import ast
import json
import requests

# Call API to pull data
url = 'https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22'

response = requests.get(url = url)
response_data = response.json()

#data = response_data

data = """{  
   'coord':{  
      'lon':-0.13,
      'lat':51.51
   },
   'weather':[  
      {  
         'id':300,
         'main':'Drizzle',
         'description':'light intensity drizzle',
         'icon':'09d'
      }
   ],
   'base':'stations',
   'main':{  
      'temp':280.32,
      'pressure':1012,
      'humidity':81,
      'temp_min':279.15,
      'temp_max':281.15
   },
   'visibility':10000,
   'wind':{  
      'speed':4.1,
      'deg':80
   },
   'clouds':{  
      'all':90
   },
   'dt':1485789600,
   'sys':{  
      'type':1,
      'id':5091,
      'message':0.0103,
      'country':'GB',
      'sunrise':1485762037,
      'sunset':1485794875
   },
   'id':2643743,
   'name':'London',
   'cod':200
}"""

val = ast.literal_eval(data)
val1 = json.loads(json.dumps(val))
val2 = val1['main']['temp'][0]
dataset = pd.DataFrame(val2)
#OutputDataSet=dataset
#print(val1)
print(dataset)

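The failing line is val1['main']['temp'][0]: 'temp' is already a plain float (280.32), so indexing it with [0] raises the TypeError. A sketch of two alternatives that work with this structure, continuing from val1 in the code above:

temp = val1['main']['temp']                 # just the float
dataset = pd.DataFrame([val1['main']])      # one-row frame: temp, pressure, humidity, ...
# or flatten the whole document (pandas 1.0+):
# dataset = pd.json_normalize(val1)
print(dataset)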

Hi,

I'm currently working in AWS and trying to use a Lambda function to automate the creation of my AMIs. I'm doing this with the Python script below, but when I test it, it returns an error. Can anyone shed any light on what I should be looking at, please?

Script:

import boto3
import collections
import datetime
import sys
import pprint

ec = boto3.client('ec2')
#image = ec.Image('id')

def lambda_handler(event, context):
   
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag-key', 'Values': ['backup', 'Backup']},
        ]
    ).get(
        'Reservations', []
    )

    instances = sum(
        [
            [i for i in r['Instances']]
            for r in reservations
        ], [])

    print "Found %d instances that need backing up" % len(instances)

    to_tag = collections.defaultdict(list)

for instance in instances:
    try:
        retention_days = [
            int(t.get('Value')) for t in instance['Tags']
            if t['Key'] == 'Retention'][0]
    except IndexError:
        retention_days = 7

    finally:

        #for dev in instance['BlockDeviceMappings']:
        #    if dev.get('Ebs', None) is None:
        #        continue
        #    vol_id = dev['Ebs']['VolumeId']
        #    print "Found EBS volume %s on instance %s" % (
        #        vol_id, instance['InstanceId'])

            #snap = ec.create_snapshot(
            #    VolumeId=vol_id,
      …
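Two hedged guesses at what the test is tripping over, since the actual error text isn't shown: the print statements use Python 2 syntax, which is a SyntaxError on a Python 3 Lambda runtime, and the "for instance in instances:" loop is dedented to module level, so it runs at import time before lambda_handler() has defined instances. A compact sketch of the intended shape:

import boto3

ec = boto3.client('ec2')

def lambda_handler(event, context):
    reservations = ec.describe_instances(
        Filters=[{'Name': 'tag-key', 'Values': ['backup', 'Backup']}]
    ).get('Reservations', [])
    instances = [i for r in reservations for i in r['Instances']]

    print("Found %d instances that need backing up" % len(instances))   # print() on Python 3

    for instance in instances:                 # keep the per-instance work inside the handler
        try:
            retention_days = [int(t['Value']) for t in instance.get('Tags', [])
                              if t['Key'] == 'Retention'][0]
        except IndexError:
            retention_days = 7
        # ... create the AMI / snapshots for this instance here ...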
Extracting API Data Using Python and Loading into SQL Server

Hi,
I am new to Python in SQL Server. I'd like to load JSON data from an API into SQL Server, and I thought the best way to do this would be to use the new SQL Server Machine Learning Services with Python.

I can call the API and print the JSON data in SSMS:

execute sp_execute_external_script 
@language = N'Python',
@script = N'

# We import the requests module which allows us to make the API call
import pandas as pd
import json
import requests
 
# Call API to pull data
url = ''https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22''

response = requests.get(url = url)
response_data = response.json()

print(response_data)
'



I'm pretty happy using the JSON functions in SQL Server to format and parse the data into SQL tables, but with Python, how do I read/access the JSON data from the response in a T-SQL query?

Thank you
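With Machine Learning Services the usual route (a sketch, not the only one) is to build a pandas DataFrame inside the script and assign it to OutputDataSet; sp_execute_external_script then returns it as a result set, whose shape can be declared with WITH RESULT SETS or captured with INSERT ... EXEC into a table. The Python half might look like this, with the column selection and the dbo.Weather target table as assumptions:

# inside the @script body, after response_data = requests.get(url).json()
import pandas as pd

OutputDataSet = pd.DataFrame([{
    'city':     response_data['name'],
    'temp':     response_data['main']['temp'],
    'pressure': response_data['main']['pressure'],
    'humidity': response_data['main']['humidity'],
}])
# T-SQL side (sketch): either
#   EXEC sp_execute_external_script ... WITH RESULT SETS ((city NVARCHAR(100), temp FLOAT, pressure INT, humidity INT));
# or capture the rows with
#   INSERT INTO dbo.Weather (city, temp, pressure, humidity) EXEC sp_execute_external_script ...;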
Hi Experts

I am new to Python and am now playing with Python and MySQL. I am trying to feed MySQL with a list of all the files on my NAS. Everything works OK when I use os.walk to get the files on the NAS and insert the filenames into MySQL one by one.

But I believe it would be much quicker if I could insert the filenames into MySQL in batches, e.g. 1000 filenames in a single step.

What I would like to know is how to achieve this.

Many thanks for your kind help

Vladimir
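pymysql's cursor.executemany accepts a list of parameter tuples and sends them as one batched INSERT; committing once per batch rather than once per row is where most of the speed-up comes from. A minimal sketch, assuming conn, cur and folder come from your existing setup code:

import os

batch = []
for root, dirs, files in os.walk(folder):
    for name in files:
        full = os.path.join(root, name)
        batch.append((full, os.stat(full).st_size))
        if len(batch) >= 1000:
            cur.executemany("INSERT INTO TFiles (filename, filesize) VALUES (%s, %s)", batch)
            conn.commit()
            batch = []

if batch:                                   # whatever is left over at the end
    cur.executemany("INSERT INTO TFiles (filename, filesize) VALUES (%s, %s)", batch)
    conn.commit()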
Dear Experts

Starting to play with Python, I am checking out functions for file and folder manipulation.

I wrote a short piece of code to walk through a file/folder tree and calculate an MD5 hash for each file.

However, I got an error - file not found. It looks like the os.walk function is able to retrieve even files with long paths, but the following os.stat call is not able to handle them.

Am I right that this is my problem? And if it is, is there any workaround?

Many thanks

Vladimir

Here is Python code:
import hashlib,os,sys,pymysql
folder = "//synologymaly/dropbox/######Backup/#soft/#sort/cse/Computer, Technology and Engineering eBooks/books8/Game Programming/ProgrammingGameAI/AIBinaries/"
print(folder)
md5_hash = hashlib.md5()

#Init connection to mySQL server
conn=pymysql.connect(host='tower.local', user="root",password="root_password",database="Files")
cur=conn.cursor()
conn.commit()

#cur.execute("delete from TFiles where true")
#conn.commit()

TotalSize=0
TotalFileNumber=0

for root, dirs, files in os.walk(folder, topdown=False):
	for name in files:
		print (os.path.join(root,name))
		statinfo = os.stat(os.path.join(root, name))
		filename = os.path.join(root, name)
		md5_hash = hashlib.md5()
		with open(filename,"rb") as f:
			#Read and update hash in chunks of 4K
			for byte_block in iter(lambda: f.read(4096),b""):
				md5_hash.update(byte_block)
			sql="insert into TFiles (filename,filesize,md5) values (%s,%s,%s)"
			#print(os.path.join(root,name))
			#print(statinfo.st_size)
		

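This looks like the Windows MAX_PATH (260-character) limit rather than a Python bug: os.walk can still list the names, but os.stat and open need the extended-length path prefix for anything longer. A hedged workaround sketch; for a UNC share like //synologymaly/... the prefix form is \\?\UNC\server\share\...:

import os

def extended(path):
    # convert to the \\?\ (or \\?\UNC\) form that bypasses the 260-character limit
    path = os.path.abspath(path).replace('/', '\\')
    if path.startswith('\\\\?\\'):
        return path
    if path.startswith('\\\\'):                    # UNC share
        return '\\\\?\\UNC\\' + path.lstrip('\\')
    return '\\\\?\\' + path

# then, inside the walk loop:
#     statinfo = os.stat(extended(os.path.join(root, name)))
#     with open(extended(filename), 'rb') as f: ...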

Hi

I have installed PyCharm 2018 and am creating some Python code with the first line:

import numpy as np

Apparently this doesn't end well - I get the error below. Why is this, and how do I fix it? I have Anaconda installed on the machine as well, so I would expect Python (and numpy) to be available.

  File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'numpy'

hi

I needed to convert the following stub, which operates on a Python list, to make it suitable for a numpy array, but I cannot seem to get the index lookup working for the numpy array.

My method is called as follows: mymethod([1,2,3,4,5,6,7,8,9,10], 3), where b=3

  i = mylist.index(b)
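numpy arrays have no .index() method; the usual replacement is a boolean comparison plus np.where or np.flatnonzero, which returns the matching positions. A minimal sketch:

import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
b = 3
i = int(np.flatnonzero(arr == b)[0])    # first position where arr == b, here 2
print(i)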
Hi EE

I am trying to look up the data and target of mydata, which is a pandas DataFrame, but I get errors as follows:

import pandas as pd


mydata = pd.read_csv('D:\Learning\Languages\ML\Python_Data1.csv')

print(mydata.keys())  #runs ok
print(mydata.target)  # AttributeError: 'DataFrame' object has no attribute 'target'
print(mydata.data)   #AttributeError: 'DataFrame' object has no attribute 'data'
print(mydata.DESCR) # AttributeError: 'DataFrame' object has no attribute 'DESCR'
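.data, .target and .DESCR are attributes of scikit-learn's bundled dataset objects (the Bunch returned by load_iris() and friends), not of a pandas DataFrame loaded from a CSV. With your own CSV you pick the label column yourself; a sketch, where 'label' is a placeholder for whichever column in Python_Data1.csv holds the target:

target = mydata['label']                   # hypothetical target column name
data = mydata.drop(columns=['label'])      # the remaining columns as features
print(data.shape, target.shape)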
Hi Experts,

subprocess.call fails in my Python code.

This is the error I get when I run the code:

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/harvest-territory-stories# python3 run.py process
2018-09-26 14:53:11,262 INFO before main ####
2018-09-26 14:53:11,304 INFO inside main
2018-09-26 14:53:11,306 INFO parse received
2018-09-26 14:53:11,306 INFO add_argument
2018-09-26 14:53:11,306 INFO args Namespace(method='process')

2018-09-26 14:53:11,306 INFO before switch

2018-09-26 14:53:11,559 INFO there are 269825 items to process
/usr/local/lib/python3.6/dist-packages/wand/image.py:2758: CoderWarning: Unknown field with tag 42036 (0xa434) encountered. `TIFFReadCustomDirectory' @ warning/tiff.c/TIFFWarnings/881
  self.raise_exception()
/usr/local/lib/python3.6/dist-packages/wand/image.py:2758: CoderWarning: Unknown field with tag 42037 (0xa435) encountered. `TIFFReadCustomDirectory' @ warning/tiff.c/TIFFWarnings/881
  self.raise_exception()
2018-09-26 14:53:24,938 ERROR Uncaught exception
Traceback (most recent call last):
  File "run.py", line 318, in <module>
    main()
  File "run.py", line 306, in main
    thumbnails()
  File "run.py", line 223, in thumbnails
    create_thumbnails_from_database(destination, thumbnail, cookies, prefix)
  File "/home/ubuntu/workarea/sourcecode/harvest-territory-stories/harvest/extract.py", line 292, in create_thumbnails_from_database
    _handle_pdf(os.path.join(parent, entry.id), 


Hello,

I'm working on a python script to generate SQL CREATE TABLE statements from a list of delimited files.  
I often receive data in delimited format with each field surrounded by double-quotes to prevent issues importing when someone has typed the delimiter in a text field.

Here's a simple example of this:

ID|Text1|Text2|Year
"123"|"more data"|"Here is a pipe symbol | typed by an end user"|"2014"

In this example, the end user has typed a pipe symbol into a text field.  The double quotes are intended to prevent this row from resulting in 5 columns.  

I'm using the Python csv Sniffer module to deduce a "dialect"; specifically what delimiter is used and whether fields are quoted (text-qualified) as above.

Sniffer is correctly identifying the pipe delimiter but I think it's not giving the correct value for "doublequote".  
Here's what sniffer is returning for a dialect:
['delimiter', 'doublequote', 'escapechar', 'lineterminator', 'quotechar', 'quoting', 'skipinitialspace']
['|', False, None, '\r\n', '"', 0, False]

It is getting the delimiter, the line terminator, and I think the quotechar is correct.  
I'm thinking 'doublequote' should be True.  

Any input on this will be greatly appreciated.  

TIA!
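For what it's worth, in the csv module 'doublequote' only describes how a literal quote character inside a field is escaped (doubled "" versus an escapechar); whether fields are quoted at all is carried by quotechar and quoting, which Sniffer did pick up. A quick sketch showing the sniffed dialect still keeps the embedded pipe inside one field:

import csv, io

sample = 'ID|Text1|Text2|Year\n"123"|"more data"|"Here is a pipe symbol | typed by an end user"|"2014"\n'
dialect = csv.Sniffer().sniff(sample)
rows = list(csv.reader(io.StringIO(sample), dialect))
print(len(rows[1]))      # 4 columns, not 5
print(rows[1][2])        # Here is a pipe symbol | typed by an end user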
I'm new to Python and trying to teach myself the language. I'm using Best Buy's API to test. I'm not sure how to format the JSON data so it's readable in HTML/CSS. Can you assist?

import requests

url = "https://api.bestbuy.com/v1/products%28customerReviewAverage%3E4%7CshippingWeight%3C50%29"

querystring = {"show":"upc,salePrice,customerReviewAverage,shippingWeight","apiKey":"XXXXXXXXXXXXXX","format":"json"}

response = requests.request("GET", url, params=querystring)

print(response.text)



Here is how the JSON response looks.

{"from":1,"to":10,"currentPage":1,"total":190579,"totalPages":19058,"queryTime":"0.203","totalTime":"0.230","partial":false,"canonicalUrl":"/v1/products(customerReviewAverage>4|shippingWeight<50)?show=upc,salePrice,customerReviewAverage,shippingWeight&format=json&apiKey=XXXXXXXXXXXX","products":[{"upc":"760514017023","salePrice":442.99,"customerReviewAverage":5.00,"shippingWeight":11.00},{"upc":"877929006358","salePrice":489.98,"customerReviewAverage":4.80,"shippingWeight":30.80},{"upc":"664254218934","salePrice":320.99,"customerReviewAverage":5.00,"shippingWeight":11.00},{"upc":"612934533280","salePrice":249.99,"customerReviewAverage":5.00,"shippingWeight":7.30},{"upc":"664254011221","salePrice":171.99,"customerReviewAverage":5.00,"shippingWeight":11.00},{"upc":"068888879262","salePrice":132.99,"customerReviewAverage":4.80,"shippingWeight":7.85},{"upc":"612934536229","salePrice":1062.99,"customerReviewAverage":4.70,"shippingWeight":28.50},{"upc":"093207100253","salePrice":499.98,"customerReviewAverage":4.40,"shippingWeight":35.89},{"upc":"865334000122","salePrice":202.99,"customerReviewAverage":4.20,"shippingWeight":9.99},{"upc":"865334000115","salePrice":237.99,"customerReviewAverage":4.30,"shippingWeight":9.99}]}

Open in new window

Hi,

I have Python installed on my Windows 7 machine. Is there any way I can go back to the previous command I typed in the Python interactive prompt? Normally in PowerShell or cmd you press the up arrow.

regards,
kay
