
Python

Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax lets programmers express concepts in fewer lines of code than many other languages. Python supports multiple programming paradigms, including object-oriented, imperative, functional, and procedural styles. It features a dynamic type system and automatic memory management, and has a large, comprehensive standard library, complemented by a rich third-party ecosystem that includes NumPy, SciPy, Django, and PyQuery.


Hi,

How do I solve this error? Please help. Thanks.

line 68, in <module>
    volume_id.Apppend(result['volume']['id'])
AttributeError: 'list' object has no attribute 'Apppend'


I need more than one system volume, but only one volume is being created on the cloud platform. Please advise. Thanks.


start_number = 1
number_of_volumes = 10
volume_id = []

for i in range(number_of_volumes):
    vol_no = start_number + i
    create_volume_url = "https://domain.com/v2/cd088007d3b84e7fa894478e6fe667c4/volumes"
    body = {"volume":
        {
            "size": 60,
            "availability_zone": "az0.dc0",
            "volume_type": "ssd",
            "name": "vol" + str(vol_no),
            "multiattach": False,
            "imageRef": "c54d05fa-5ad8-425e-be56-e60ede395230"
        }
    }
    headers = {
        'content-type': "application/json",
        'X-Auth-Token': token
    }

    result = requests.post(create_volume_url, json=body, headers=headers, verify=False)
    volume_id.apppend(result['volume']['id'])
    print(result.json())
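The traceback is a simple typo: lists have an `append` method, and `apppend` (three p's) does not exist, so Python raises AttributeError. Note also that `requests.post` returns a Response object, so the volume ID has to be read from `result.json()`, not by indexing `result` directly. A corrected sketch of the loop body, using a stub in place of the live endpoint (which needs a token):

```python
volume_id = []

# Stand-in for requests.post(...).json(); the real response body has
# this {'volume': {'id': ...}} shape per the question's create call.
def fake_post_json(vol_no):
    return {'volume': {'id': 'vol-id-%d' % vol_no}}

for i in range(3):
    result_json = fake_post_json(i + 1)
    volume_id.append(result_json['volume']['id'])   # append, not apppend
```

In the real script the two lines become `result = requests.post(...)` followed by `volume_id.append(result.json()['volume']['id'])`.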

How can I read particular text fields (like empno, name, salary, etc.) from a scanned or image file and store them in a DB using Python with a deep-learning library (like TensorFlow, PyTorch, etc.)?

Please provide some sample code and advise which library to use for this.
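For printed fields like empno/name/salary, classical OCR via Tesseract (through the `pytesseract` wrapper) is usually worth trying before a custom TensorFlow/PyTorch model; deep learning earns its keep mainly for handwriting or very noisy scans. A sketch of the parse-and-store half, with the OCR call shown in comments (the file name, `key: value` layout, and table schema are my assumptions, not from the question):

```python
import sqlite3

def parse_fields(text):
    """Pull 'key: value' lines out of OCR output (naive; a real
    document layout needs its own parsing rules)."""
    fields = {}
    for line in text.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            fields[key.strip().lower()] = value.strip()
    return fields

# With pytesseract installed (pip install pytesseract pillow, plus the
# tesseract-ocr binary), the OCR step would be:
#   from PIL import Image
#   import pytesseract
#   text = pytesseract.image_to_string(Image.open('payslip.png'))
text = "EmpNo: 101\nName: RAM\nSalary: 1000"   # stand-in OCR output

f = parse_fields(text)
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE emp (empno TEXT, name TEXT, salary TEXT)')
conn.execute('INSERT INTO emp VALUES (?, ?, ?)',
             (f['empno'], f['name'], f['salary']))
```

sqlite3 stands in for whatever DB you use; swap in your own connector and placeholders.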
Hi,

Can anyone help improve this Python script to provision 10 VMs, and automate storing the system volume IDs in, say, a CSV file, so the script can point at the system volumes and avoid manual work?

I am learning this for the first time: in Postman I can GET to connect, POST to obtain a token, POST to create a system volume, and POST to provision a single VM, but I have to copy the system volume ID, network ID, and flavour ID by hand.

Now I am trying to use a Python script to obtain the token and create the system volume, but I am stuck. Please help.


# 1. Obtain token

# Store the urls we want to use
url = "https://silan_IP/silvan/apigateway/v1.0/"
get_apis = "apis_include_throttles"

get_token_url = "https://iam-apigateway-proxy.domain.com/v3/auth/tokens"

body = {
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "SITC_NCC"
                    },
                    "name": "admin",
                    "password": "P@ssw0rd"
                }
            }
        },
        "scope": {
            "project": {
                "id": "cd088007d3b84e7fa894478e6fe667c4",
                "domain": {
                    "name": "SITC_NCC"
                }
            }
        }
    }
}


# 2 Create System Volume

results = requests.post(get_token_url, json=body, verify=False)

token = …
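To avoid copying volume IDs by hand, the provisioning loop can write each ID to a CSV file as it goes, and the VM-provisioning step can read them back. A sketch with the standard `csv` module (the IDs here are hypothetical stand-ins; in the real script each one comes from the create-volume response, `result.json()['volume']['id']`):

```python
import csv

# Hypothetical IDs; in the real loop, append each one as the
# create-volume call returns.
volume_ids = ['vol-id-1', 'vol-id-2', 'vol-id-3']

with open('volumes.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'volume_id'])
    for i, vid in enumerate(volume_ids, start=1):
        writer.writerow(['vol%d' % i, vid])

# Later, the VM-provisioning step reads them back instead of relying
# on manually copied IDs:
with open('volumes.csv', newline='') as f:
    rows = list(csv.DictReader(f))
```

Each `row['volume_id']` can then be dropped straight into the server-creation request body.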
How can I update frozen Python code?
A normal Python program can open its own source as a text file and write to it, and the changes will be there the next time it runs.
Is this possible with frozen Python code or not?
I tried searching on the internet and didn't find anything relevant.
Hi,

Why can't I delete my system volumes? The delete fails with an error, even though I've checked that the volumes have no more VMs, snapshots, etc. attached.

Using Postman to delete gives an error; please check my Postman screenshot.
2.jpg
Using a Python script to delete also gives an error; please check my Python code.

import requests

get_token_url = "https://iam-apigateway-proxy.domain.com/v3/auth/tokens"

body = {
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "XXXXX"
                    },
                    "name": "XXXXX",
                    "password": "XXXXX"
                }
            }
        },
        "scope": {
            "project": {
                "id": "cd088007d3b84e7fa894478e6fe667c4",
                "domain": {
                    "name": "XXXXX"
                }
            }
        }
    }
}

# POST to the API
results = requests.post(get_token_url, json=body, verify=False)


token = results.headers['X-Subject-Token']

#Deletion

volume_id = [
    'c3b803ee-f1ab-428d-bc55-01b380c36d49',
    '065ba0b3-959a-4dc0-af23-17b8a1099367',
    'b282ea13-5e24-48fb-b7e2-84fce0973621',
    'aa87fc71-ae1b-47ee-82fc-bcd73a59538d',
    'a72cc934-473e-4cde-8eb0-743ca058e350'

]

delete_url = "https://evs.domain.com/v2/cd088007d3b84e7fa894478e6fe667c4/volumes/"
headers = {
    'content-type': "application/json",
    …
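The DELETE must target the individual volume URL (the base URL with the volume ID appended), and printing the response body surfaces the actual reason for the failure: a stale token typically gives 401, an attached or in-use volume a 400-series error, while OpenStack-style volume APIs answer 202 Accepted on success. A sketch of the deletion loop (endpoint and IDs are from the question; the live call stays commented since it needs a valid token):

```python
# Endpoint and volume IDs as given in the question.
delete_url = "https://evs.domain.com/v2/cd088007d3b84e7fa894478e6fe667c4/volumes/"
volume_id = [
    'c3b803ee-f1ab-428d-bc55-01b380c36d49',
    '065ba0b3-959a-4dc0-af23-17b8a1099367',
]
urls = [delete_url + vid for vid in volume_id]

# for url in urls:
#     resp = requests.delete(url, headers=headers, verify=False)
#     if resp.status_code != 202:        # 202 Accepted = delete queued
#         print(resp.status_code, resp.text)   # the real error message
```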
Dear Experts,

I am new to Python and MySQL. As part of my training I am creating a database listing all my files across all my NAS devices. I have tables TPaths and TFiles: TPaths contains PathName, and TFiles contains FileName, with PathName as a foreign key in TFiles linking path and file name together.

This is clear.

My question is maybe very easy: when I am walking across files and folders, for each file found I need to check whether its path is already listed in the TPaths table. What is the quickest way to do it? Should I index the PathName field? Should I search it as full text? Should I use a LIKE clause?

Many thanks for your answer,

Vladimir
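With an index on PathName, an exact-match lookup (`WHERE PathName = ...`) is the quickest option; LIKE and full-text search are for pattern matching and would only slow this down. And since the walker visits each directory once, caching already-seen paths in a Python set avoids most round trips entirely. A sketch using sqlite3 purely for illustration (with MySQL the idea is identical: `CREATE UNIQUE INDEX` on PathName and `%s` placeholders instead of `?`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE TPaths (PathName TEXT)')
conn.execute('CREATE UNIQUE INDEX idx_pathname ON TPaths (PathName)')

seen = set()   # per-run cache: at most one DB round trip per new path

def path_known(path):
    """True if the path is already in TPaths; inserts it otherwise."""
    if path in seen:
        return True
    row = conn.execute('SELECT 1 FROM TPaths WHERE PathName = ?',
                       (path,)).fetchone()
    if row is None:
        conn.execute('INSERT INTO TPaths (PathName) VALUES (?)', (path,))
    seen.add(path)
    return row is not None

first = path_known('/nas1/photos')    # not yet known: inserted
second = path_known('/nas1/photos')   # answered from the cache
```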
Traceback (most recent call last):
  File "C:\Python27zzz\projects\gangmember\bottle.py", line 997, in _handle
    out = route.call(**args)
  File "C:\Python27zzz\projects\gangmember\bottle.py", line 2000, in wrapper
    rv = callback(*a, **ka)
  File "gangmember.py", line 57, in new_member
    c.execute("INSERT INTO gangmember (lastname, firstname, birthdate, deathdate, address, city, state, zipcode, father, mother, status, gangname) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)")
ProgrammingError: Incorrect number of bindings supplied. The current statement uses 12, and there are 0 supplied.
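The INSERT declares 12 `?` placeholders, but `execute` is called with no values argument, hence "0 supplied". The fix is to pass a 12-element tuple as the second argument (the values below are hypothetical; in the Bottle handler they would come from the submitted form fields):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("""CREATE TABLE gangmember (lastname, firstname, birthdate,
             deathdate, address, city, state, zipcode, father, mother,
             status, gangname)""")

# Exactly 12 values, matching the 12 placeholders.
values = ('Doe', 'John', '1970-01-01', '', '1 Main St', 'Springfield',
          'IL', '62701', 'Richard', 'Jane', 'active', 'Sharks')
c.execute("INSERT INTO gangmember (lastname, firstname, birthdate, deathdate,"
          " address, city, state, zipcode, father, mother, status, gangname)"
          " VALUES (?,?,?,?,?,?,?,?,?,?,?,?)", values)
conn.commit()
```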
Hi,

I am working through the process of automating my backups in AWS. I have a process to take VSS snapshots of my volumes, and I also have a separate script in AWS Lambda which can automatically take an AMI. What I'm now looking to do is combine these into one single function. The Lambda script does take a snapshot, however I'm not certain that it's a VSS-consistent snapshot. I've googled the issue, but all the articles I've come across seem to describe the two processes as separate entities.

This is the Python script I'm using in Lambda to take an AMI:

# Automated AMI Backups
#
# @author Robert Kozora <bobby@kozora.me>
#
# This script will search for all instances having a tag with "Backup" or "backup"
# on it. As soon as we have the instances list, we loop through each instance
# and create an AMI of it. Also, it will look for a "Retention" tag key which
# will be used as a retention policy number in days. If there is no tag with
# that name, it will use a 7 days default value for each AMI.
#
# After creating the AMI it creates a "DeleteOn" tag on the AMI indicating when
# it will be deleted using the Retention value and another Lambda function

import boto3
import collections
import datetime
import sys
import pprint

ec = boto3.client('ec2')
#image = ec.Image('id')

def lambda_handler(event, context):
   
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag-key', 'Values': ['backup', 'Backup']},
        ]…
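Whether that snapshot is VSS-consistent is a separate concern: a plain `create_image` call does not go through the VSS agent; on AWS, application-consistent VSS snapshots of Windows instances are normally driven via the SSM Run Command document `AWSEC2-CreateVssSnapshot` (worth verifying against current AWS docs). Meanwhile, the retention logic the header comments describe (a "Retention" tag in days, defaulting to 7, producing a "DeleteOn" date) can be pulled out into a small pure helper that is easy to test outside Lambda (the tag shape follows `describe_instances`; the function name is mine):

```python
import datetime

def delete_on(tags, default_days=7):
    """Compute the 'DeleteOn' date from a 'Retention' tag, defaulting
    to 7 days. `tags` follows the EC2 describe_instances shape:
    [{'Key': ..., 'Value': ...}, ...]."""
    days = next((int(t['Value']) for t in tags if t['Key'] == 'Retention'),
                default_days)
    return (datetime.date.today() +
            datetime.timedelta(days=days)).strftime('%Y-%m-%d')

tag_sets = [[{'Key': 'Retention', 'Value': '14'}], []]
dates = [delete_on(t) for t in tag_sets]
```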
Please help: I have this Python 2.7 code, which works great for downloading files from the server. How could I extend it to add uploads over the socket, keep the threading, add the ability to list files on the server side, and require the user to enter a (hardcoded) username and password before being granted access to list files?

The requirements are simple, but I'm still a beginner.

Server Code

 
import socket
import threading
import os

def RetrFile(name, sock):
    filename = sock.recv(1024)
    if os.path.isfile(filename):
        sock.send("EXISTS " + str(os.path.getsize(filename)))
        userResponse = sock.recv(1024)
        if userResponse[:2] == 'OK':
            with open(filename, 'rb') as f:
                bytesToSend = f.read(1024)
                sock.send(bytesToSend)
                while bytesToSend != "":
                    bytesToSend = f.read(1024)
                    sock.send(bytesToSend)
    else:
        sock.send("ERR ")

    sock.close()

def Main():
    host = '127.0.0.1'
    port = 5000


    s = socket.socket()
    s.bind((host,port))

    s.listen(5)

    print "Server Started."
    while True:
        c, addr = s.accept()
        print "client connected ip:<" + str(addr) + ">"
        t = threading.Thread(target=RetrFile, args=("RetrThread", c))
        t.start()

    s.close()

if __name__ == '__main__':
    Main()


Client code

import socket

def Main():
    host = '127.0.0.1'
    port = 5000

    s = socket.socket()
    s.connect((host, port))

    filename = raw_input("Filename? -> ")
    if filename != 'q':
        s.send(filename)
        data = s.recv(1024)
        if data[:6] == 'EXISTS':
            filesize = long(data[6:])
            message = raw_input("File exists, " + str(filesize) +"Bytes, download? (Y/N)? -> ")
            if message == 'Y':
                s.send("OK")
                f = open('new_'+filename, 'wb')
                data = s.recv(1024)
                totalRecv = len(data)
                f.write(data)
                while totalRecv < filesize:
                    data = s.recv(1024)
                    totalRecv += len(data)
                    f.write(data)
                    print "{0:.2f}".format((totalRecv/float(filesize))*100)+ "% Done"
                print "Download Complete!"
                f.close()
        else:
            print "File Does Not Exist!"

    s.close()

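One way to bolt on login and listing without reworking the socket plumbing is a small command protocol: the first message must be `LOGIN user pass`, and only then may the client send `LIST`, `GET <file>`, or `PUT <file>`. The dispatch logic can live in a pure function, shown here in version-agnostic form so it is testable; the command names and wire format are my own invention, not a standard:

```python
USERS = {'admin': 'secret'}   # hardcoded credentials, per the question

def handle_command(line, authed, files):
    """Return (reply, new_authed). `files` is the server-side listing,
    e.g. os.listdir('.') in the real server."""
    parts = line.strip().split()
    if not parts:
        return 'ERR empty', authed
    cmd = parts[0].upper()
    if cmd == 'LOGIN' and len(parts) == 3:
        ok = USERS.get(parts[1]) == parts[2]
        return ('OK' if ok else 'ERR bad credentials'), ok
    if not authed:
        return 'ERR login first', False
    if cmd == 'LIST':
        return ' '.join(sorted(files)), True
    if cmd in ('GET', 'PUT'):
        return 'READY', True   # hand off to the existing transfer code
    return 'ERR unknown command', authed

reply, authed = handle_command('LOGIN admin secret', False, ['a.txt'])
```

In the threaded handler, each `sock.recv` line goes through `handle_command`; on `READY` for GET you reuse the existing download loop, and PUT is the same loop with the read/write direction reversed.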

I am learning to use Python, PyCharm, and pip to provision 1000 VMs through a script that calls the API, but I don't know how to start.

What is the proper way to install Python with the PyCharm IDE and pip on Windows 10?
Where can I obtain Python template code to provision, say, 1000 VMs, so that I can modify the flavours, images, types, servers, and network?
How do I put the digital token in the script, and where and how do I execute the script when connected through the VPN client on my PC?
How do I generate a private key from the Python script as well?


I managed to use the Postman web app from Google Chrome to test calling the API to provision some VMs, as shown below.

1. Connected to Private Cloud Fusion Cloud 6.3 from Web POSTMAN

2. Obtained a digital token
POST => https://iam-apigateway-proxy.domain.com/v3/auth/tokens

{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "XXX"
                    },
                    "name": "XXX",
                    "password": "XXX"
                }
            }
        },
        "scope": {
            "project": {
                "id": "XXX",
                "domain": {
                    "name": "XXX"

3. Provisioned System Volume
POST => https://evs.domain.com/v2/cd088007d3b84e7fa894478e6fe667c4/volumes

{
    "volume": {
        "size": 60,
        "availability_zone": "az0.dc0",
        "volume_type": "ssd",
        "name": "volume1",
        "multiattach": false,
        "imageRef": "ca9384bd-5d78-49ce-a4c3-6d90d77c623c"
    }
}

4. Provisioned VM  
POST => https://ecs.domain.com/v2/cd088007d3b84e7fa894478e6fe667c4/servers
{
    "server": {
        "flavorRef": …
Hello there.

I am trying to make server pagination work for:
http://bootstrap-table.wenzhixin.net.cn/documentation/

My backend is Python Flask, and I have this function preparing the data.
It works, but I cannot understand how to paginate it.
The parameters I receive from the frontend GET call are the ones you see:
@app.route('/json/items')
def get_items():
    """
    Return a JSON containing all items
    """
    order = request.args.get('order')
    offset = request.args.get('offset')
    limit = request.args.get('limit')
    items = db.session.query(Item).all()
    return jsonify([i.serialize for i in items])

Thanks for your help and patience.
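bootstrap-table's server-side pagination mode (`sidePagination: 'server'`) sends `offset` and `limit` as query parameters and expects a JSON *object* with `total` and `rows` back, rather than a bare list. A sketch of the response-shaping step, kept as a pure function so it is testable outside Flask (for large tables you would push the slicing into the query with `.offset()`/`.limit()` instead of calling `.all()`):

```python
def paginate(items, offset, limit):
    """Shape the response the way bootstrap-table's server-side mode
    expects: {'total': ..., 'rows': ...}, sliced by offset/limit
    (which arrive from request.args as strings or None)."""
    offset = int(offset or 0)
    limit = int(limit or 10)
    return {
        'total': len(items),
        'rows': items[offset:offset + limit],
    }

# In the route, keeping the question's Item model and serialize property:
#     items = [i.serialize for i in db.session.query(Item).all()]
#     return jsonify(paginate(items, request.args.get('offset'),
#                             request.args.get('limit')))

page = paginate(list(range(95)), '10', '10')
```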
Hi,

Can anyone advise how to write a Python script template to create 2000 VMs (with different flavours, images, VPCs, system volumes) on a private cloud, calling the API of Huawei FusionCloud 6.3 or AWS?

The script template should also be able to shut down, restart, destroy, recreate, and back up the VMs, VPCs, system volumes, etc. when the staging DC configuration needs to be redone.

Thanks.
I want to extract the product description from 10-K reports for my master's thesis (I'm new to programming, with a finance background). The product description sits between "ITEM 1" and "ITEM 2" in the reports. So far I have downloaded all the 10-Ks in .txt form, removed the HTML tags, and made all the text uppercase. My problem comes when I try to select the text I need and save it to another directory. I tried doing the selection on my own, but with unsatisfactory results. Currently I am using code by "iammrhelo" on GitHub; his code selects "ITEM 7" to "ITEM 8", and with a bit of tweaking I made it search for what I need. Link to his code: https://github.com/iammrhelo/edgar-10k-mda

The code is able to identify some filings but not others (the screenshots are omitted here); the parsing does not work for all 10-Ks. For context, I need to find the right syntax for the code to look for; the phrases it searches for are in the `item1_begins` list variable. The code I am using to select the text is the following:
import argparse
import codecs
import os
import time
import re

from pathos.pools import ProcessPool
from pathos.helpers import cpu_count

class MDAParser(object):
    def __init__(self):
        pass

    def extract(self, txt_dir, mda_dir, parsing_log):
        self.txt_dir = txt_dir
        if not os.path.exists(txt_dir):
            os.makedirs(txt_dir)

        self.mda_dir

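The heading wording varies a lot across filings ("ITEM 1.", "ITEM 1:", extra whitespace), and filings also repeat the headings in the table of contents, which is why a first-match approach fails on some documents. One robust pattern: match every "ITEM 1 … ITEM 1A/ITEM 2" span and keep the longest one. A sketch (this assumes the uppercased text described above, and that the heading includes the word BUSINESS; drop that token if some of your filings omit it):

```python
import re

# Tolerate "ITEM 1.", "ITEM 1:", and stray whitespace; stop at the
# next "ITEM 1A" or "ITEM 2" heading. DOTALL lets '.' span newlines.
ITEM1_RE = re.compile(
    r"ITEM\s+1\s*[.:]?\s*BUSINESS(.*?)ITEM\s+(?:1A|2)\s*[.:]?",
    re.DOTALL)

def extract_item1(text):
    """Return the text between 'ITEM 1' and the next 'ITEM 1A'/'ITEM 2'
    heading, or None when no such span is found. The table of contents
    produces short spurious matches, so keep the longest match."""
    matches = ITEM1_RE.findall(text)
    return max(matches, key=len).strip() if matches else None
```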
Hi, I would like to compare any two text files, which may be somewhat bigger in size, using Spark DataFrames. A sample requirement is shared below.

Ideally, take the first record from file1, search the whole of file2, bring back all matched occurrences, and export them to an output file with the proper flag (update/delete/insert/same) using PySpark. All other records from file1 should follow the same approach.

DataSet1 - (file1.txt)
NO  DEPT NAME   SAL
1   IT  RAM     1000    
2   IT  SRI     600
3   HR  GOPI    1500    
5   HW  MAHI    700

DataSet2 - (file2.txt)
NO  DEPT NAME   SAL
1   IT   RAM    1000    
2   IT   SRI    900
4   MT   SUMP   1200    
5   HW   MAHI   700

Output Dataset - (outputfile.txt)
NO  DEPT NAME   SAL FLAG
1   IT  RAM     1000    S
2   IT  SRI     900     U
4   MT  SUMP    1200    I
5   HW  MAHI    700     S
3   HR  GOPI    1500    D

Thanks
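In PySpark the expected output falls out of a full outer join on NO (`df1.join(df2, on='NO', how='full_outer')`) followed by a column-wise comparison to assign the flag. Here is that flag rule in plain Python, which makes it explicit and testable before porting to Spark (S = same, U = updated, I = insert, D = delete):

```python
def diff_flags(ds1, ds2):
    """Flag rows keyed on NO: S = identical in both, U = key in both
    but values differ, I = only in dataset 2, D = only in dataset 1."""
    d1 = {row[0]: row for row in ds1}
    d2 = {row[0]: row for row in ds2}
    out = []
    for no, row in d2.items():
        if no not in d1:
            out.append(row + ('I',))
        elif d1[no] == row:
            out.append(row + ('S',))
        else:
            out.append(row + ('U',))
    out.extend(row + ('D',) for no, row in d1.items() if no not in d2)
    return out

# The sample datasets from the question:
ds1 = [(1, 'IT', 'RAM', 1000), (2, 'IT', 'SRI', 600),
       (3, 'HR', 'GOPI', 1500), (5, 'HW', 'MAHI', 700)]
ds2 = [(1, 'IT', 'RAM', 1000), (2, 'IT', 'SRI', 900),
       (4, 'MT', 'SUMP', 1200), (5, 'HW', 'MAHI', 700)]
flags = {row[0]: row[-1] for row in diff_flags(ds1, ds2)}
```

With DataFrames, the same comparison becomes a `when/otherwise` column expression over the joined rows, and `df.write` replaces the manual export.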
Hi

I have installed PyCharm 2018 and am creating some Python code with the first line as:

import numpy as np

and apparently this doesn't end well: I get an error. Why is this, and how do I fix it? I have Anaconda installed on the machine as well, so I would expect Python to be installed too.

  File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'numpy'
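Each PyCharm project runs against one configured interpreter, and the traceback shows that this interpreter has no numpy installed; Anaconda's Python (which bundles numpy) is a separate interpreter unless PyCharm is pointed at it. Either select the Anaconda interpreter under Settings → Project → Project Interpreter, or install numpy into the interpreter the project is using:

```shell
# Install numpy into whichever interpreter PyCharm is using
# (run from PyCharm's built-in terminal so the right python is on PATH):
python -m pip install numpy

# Verify it is importable from that same interpreter:
python -c "import numpy as np; print(np.__version__)"
```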
Hi all,

I need a Python boto3 script to scan all my EC2 instances and write the hostnames of those with unencrypted EBS volumes to a text file.

Please help!

Thanks
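One approach is to ask `describe_volumes` for volumes with the `encrypted` filter set to false, then map their attachments back to instance IDs. The filtering logic is shown below as a pure function over the response shape so it can be tested without AWS credentials; the boto3 calls in the comments are standard, but mapping instance IDs to your hostnames (via the Name tag or otherwise) is an assumption you will need to adapt:

```python
def instances_with_unencrypted_volumes(volumes):
    """`volumes` follows the describe_volumes response shape:
    [{'Encrypted': bool, 'Attachments': [{'InstanceId': ...}, ...]}, ...]
    Returns the sorted instance IDs that have an unencrypted volume."""
    ids = set()
    for vol in volumes:
        if not vol.get('Encrypted', False):
            for att in vol.get('Attachments', []):
                ids.add(att['InstanceId'])
    return sorted(ids)

# With boto3 (pip install boto3) and credentials configured:
#   import boto3
#   ec2 = boto3.client('ec2')
#   vols = ec2.describe_volumes(
#       Filters=[{'Name': 'encrypted', 'Values': ['false']}])['Volumes']
#   with open('unencrypted_hosts.txt', 'w') as f:
#       f.write('\n'.join(instances_with_unencrypted_volumes(vols)))

sample = [
    {'Encrypted': False, 'Attachments': [{'InstanceId': 'i-0abc'}]},
    {'Encrypted': True,  'Attachments': [{'InstanceId': 'i-0def'}]},
]
hosts = instances_with_unencrypted_volumes(sample)
```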
I'm having problems communicating with Modbus devices via a USB-to-RS485 adapter.

I am using Python 3.5 with PyModbus and PySerial, and my code is based on the PyModbus serial example. Under Windows it works fine and communicates with the Modbus device correctly.

When I run my code on a Raspberry Pi 3 running Ubuntu MATE 16.04.5 LTS, the device does not respond. I first thought the adapter was not installed correctly, but after double-checking everything, I installed GtkTerm, configured the port to /dev/ttyUSB0, 9600 8-N-2, and tried sending the hex data to the Modbus device; still no reply. I then noticed that under flow control there is an RS485-HalfDuplex (RTS) setting, and under the advanced options, if I set the send delay to 20 ms and RTS off to 10 ms, the device responds as expected when I send the hex Modbus packet.

I connected my oscilloscope: the Python code transmits a small packet of data at seemingly random times, looking more like the data is sent after a timeout. After digging around a bit, it seems that some USB-to-RS485 devices need RTS toggled on to put the device into transmit mode and RTS off to put it back into receive mode.

I also discovered that pySerial has RS485 settings and tried those; the transmitted packets became more regular and larger than before, but still much shorter than with GtkTerm, and much messing around with the timing helped a bit. I have now gone back to using the original code but…
Dear experts,

I am preparing for Financial Engineering (FE). I am currently learning C++. I intend to learn C++ thoroughly.

I also plan to learn Python

I have a few questions:
1. Is expert-level C++ knowledge mandatory for Financial Engineering?
2. Are machine learning and IoT (internet-related topics) prerequisites for FE?
3. Is there a book which covers the prerequisite topics for FE?

Kindly guide.

Thank you

I'm getting this error:
Exception has occurred: UnicodeDecodeError
'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

Using this code:
json_obj = urllib.request.urlopen(url).read() 

response = urllib.request.urlopen(url).read()

json_obj = str(response, 'utf-8')

data = json.loads(json_obj)
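Byte `0x8b` at position 1 (following `0x1f` at position 0) is the gzip magic number, so the response body is gzip-compressed rather than plain UTF-8, and `urlopen` does not transparently decompress it. One fix is to decompress before decoding; another is to request an uncompressed body by sending an `Accept-Encoding: identity` header. A sketch of the first option:

```python
import gzip
import json

def parse_json_response(raw):
    """Decode a possibly gzip-compressed JSON byte string."""
    if raw[:2] == b'\x1f\x8b':          # gzip magic number
        raw = gzip.decompress(raw)
    return json.loads(raw.decode('utf-8'))

# In the original script this replaces the str(response, 'utf-8') step:
#   raw = urllib.request.urlopen(url).read()
#   data = parse_json_response(raw)
```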

Hello,

This is Azizah Alqahtani. I just sent you an email like the one below, and you responded asking for the Python code.

Could you please help me solve this problem? I have to convert a Python program to Java, or rewrite the code with the same idea. The code is about one type of substitution cipher. If you can help, I will send you the assignment page and my friend's answer in Python.

Regards,
Assignment1Python.zip
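Since the assignment code itself is not shown, here is a generic monoalphabetic substitution cipher in Python, offered only as an illustrative stand-in and a reference point for the port:

```python
import string

# Any permutation of A-Z works as the key; this one is hypothetical.
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"

ENC = str.maketrans(string.ascii_uppercase, KEY)
DEC = str.maketrans(KEY, string.ascii_uppercase)

def encrypt(plaintext):
    return plaintext.upper().translate(ENC)

def decrypt(ciphertext):
    return ciphertext.upper().translate(DEC)

cipher = encrypt("HELLO")
```

In Java, `str.translate` has no direct equivalent: build a `char[26]` lookup table from the key and map each character of the message through it, which is the same idea expressed explicitly.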
Hello,

I'm working on a python script to generate SQL CREATE TABLE statements from a list of delimited files.  
I often receive data in delimited format with each field surrounded by double-quotes to prevent issues importing when someone has typed the delimiter in a text field.

Here's a simple example of this:

ID|Text1|Text2|Year
"123"|"more data"|"Here is a pipe symbol | typed by an end user"|"2014"

In this example, the end user has typed a pipe symbol into a text field.  The double quotes are intended to prevent this row from resulting in 5 columns.  

I'm using the Python csv Sniffer module to deduce a "dialect"; specifically what delimiter is used and whether fields are quoted (text-qualified) as above.

Sniffer is correctly identifying the pipe delimiter but I think it's not giving the correct value for "doublequote".  
Here's what sniffer is returning for a dialect:
['delimiter', 'doublequote', 'escapechar', 'lineterminator', 'quotechar', 'quoting', 'skipinitialspace']
['|', False, None, '\r\n', '"', 0, False]

It is getting the delimiter, the line terminator, and I think the quotechar is correct.  
I'm thinking 'doublequote' should be True.  

Any input on this will be greatly appreciated.  

TIA!
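Per the csv module docs, `doublequote` does not mean "fields are quoted": it controls whether a quotechar appearing *inside* a field is escaped by doubling it (`""`). Your sample contains no embedded quote characters, so `False` is a defensible answer from Sniffer; whether fields are text-qualified is what `quotechar` and `quoting` capture, and those look right. Either way the reader honours the quoting, as this runnable check on your sample shows:

```python
import csv
import io

sample = (
    'ID|Text1|Text2|Year\n'
    '"123"|"more data"|"Here is a pipe symbol | typed by an end user"|"2014"\n'
)

dialect = csv.Sniffer().sniff(sample)

# The embedded pipe inside the quoted third field does not split it,
# regardless of what Sniffer reported for doublequote.
rows = list(csv.reader(io.StringIO(sample), dialect))
```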
I'm trying to connect to an API, but am getting a 401 error that reads, "Access denied due to missing subscription key." How do I modify my script so that the API registers the key that I'm sending as a header?

headers = {'Ocp-Apim-Subscription-Key': 'KeyGibberish', 'Authorization': 'AuthorizationKeyGibberish', 'Accept':'application/json'}
r = requests.get("https://api.website.com/reporting/v1/users")
print(r.content)
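The `headers` dict is defined but never passed to `requests.get`, so the subscription key is never sent; passing it as the `headers=` keyword argument fixes the 401. The sketch below prepares the request offline (no network) to show the header attaching, then gives the live one-liner:

```python
import requests

headers = {
    'Ocp-Apim-Subscription-Key': 'KeyGibberish',
    'Authorization': 'AuthorizationKeyGibberish',
    'Accept': 'application/json',
}

# Prepare the request without sending it, to confirm the header is set:
req = requests.Request(
    'GET', 'https://api.website.com/reporting/v1/users', headers=headers
).prepare()

# The live call is then simply:
# r = requests.get('https://api.website.com/reporting/v1/users', headers=headers)
```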
Good morning,

I've been playing with integrating Python into our data-processing workflow. We are an MS SQL shop (2008-2012); it has served us well for years, and I think it will remain the core of our workflow, but I would like to start integrating Python, and specifically JupyterLab and Jupyter notebooks, due to the ability to combine code, documentation, and visualization of output in a single "document" format.

The other reason I'm considering integrating Python into our process is that the native tools for importing and exporting data to/from SQL Server have never felt intuitive to me, so I'm looking to Python as an alternative. I know the native SQL Server tools are incredibly powerful and could do everything I need and more; they just seem cumbersome to me.

Man - get to the point already right?  :)

The main goal is to use python connecting through pyodbc to create a database on a SQL Server (2008/2012) instance, read in a series of delimited text files (e.g., pipe | or comma separated, sometimes text quoted sometimes not) and import the files into tables on the SQL Server instance and perform any required transformations like date conversions or numeric coding of text values etc (executing SQL from Jupyter notebook cells).  

I've got the pyodbc connection down and I've managed to execute commands to create databases and tables.  What I'm looking for are code "templates" that others may have used …
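One such template: generate a staging `CREATE TABLE` from a delimited file's header row, then execute it over the existing pyodbc connection. This is a sketch under simplifying assumptions (every column is VARCHAR(255); real type inference would scan the data rows, and the table/column bracketing follows SQL Server conventions):

```python
import csv
import io

def create_table_sql(table, sample_text, delimiter='|'):
    """Build a staging CREATE TABLE statement from the header row of a
    delimited sample. All columns are typed VARCHAR(255) here for
    simplicity; refine types after profiling the data."""
    header = next(csv.reader(io.StringIO(sample_text), delimiter=delimiter))
    cols = ',\n  '.join('[%s] VARCHAR(255)' % col for col in header)
    return 'CREATE TABLE [%s] (\n  %s\n);' % (table, cols)

sql = create_table_sql('staging_import',
                       'ID|Text1|Text2|Year\n"1"|"a"|"b"|"2014"')
# cursor.execute(sql)   # via the existing pyodbc connection
```

From a notebook cell, the same function feeds each file in the series; the subsequent INSERTs can go through `executemany` with `csv.reader` supplying the rows.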
I want to read a file, and look for line A, and then look for line B, and then look for line C. Currently I have something like:

with open(infile, 'r') as inf:
    for line in inf:
        #do something with each line until we come to line A
        if line == A:
            #we found line A. Keep reading lines until we come to line B
            #do something with each line until we come to line B
            if line == B:
                #we found line B. Keep reading lines until we come to line C
                #do something with each line until we come to line C
                if line == C:
                    #Hooray!


Now I want to convert this to something nicer, like a state machine:
def initial_state()
    read lines from inf
        do something with each line until we come to line A
        if line == A:
            found_line_A()
            
def found_line_A()
    read lines from inf
        do something with each line until we come to line B
        if line == B:
            found_line_B()
            
def found_line_B()
    read lines from inf
        do something with each line until we come to line C
        if line == C:
            #Hooray!

with open(infile, 'r') as inf:
    initial_state()            


How can I read lines from the same input file inf in each of the three functions above?
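Because an open file object is itself an iterator, you can simply pass it into each state function: each `for line in inf` loop picks up exactly where the previous one stopped. A runnable sketch of the state machine above (the line contents A/B/C are placeholders, and `io.StringIO` stands in for the real file):

```python
import io

def initial_state(inf):
    for line in inf:
        # ...do something with each line until we come to line A...
        if line.rstrip('\n') == 'A':
            return found_line_A(inf)

def found_line_A(inf):
    for line in inf:
        # ...do something with each line until we come to line B...
        if line.rstrip('\n') == 'B':
            return found_line_B(inf)

def found_line_B(inf):
    for line in inf:
        # ...do something with each line until we come to line C...
        if line.rstrip('\n') == 'C':
            return 'Hooray!'

with io.StringIO('x\nA\ny\nB\nz\nC\n') as inf:
    result = initial_state(inf)
```

The one caveat: avoid mixing this with `readline()` or `seek()` mid-iteration, since the file iterator buffers internally.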
