Big Data

Big data describes data sets that are so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, querying and information privacy. The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set.


My business is exploring the option of recoding item codes, as currently it's all over the place. Ideally, going forward, we would like only one serial number generated per item, with the serial number being the same as the item number.

Is this possible, and what impact would it have on the business?

Thanks
NFR key for Veeam Backup for Microsoft Office 365

Veeam is happy to provide a free NFR license (for 1 year, up to 10 users). This license allows for the non‑production use of Veeam Backup for Microsoft Office 365 in your home lab without any feature limitations.

Hello,

I am new to Hadoop. I have a question regarding YARN memory allocation. If we have 16 GB of memory in the cluster, we can have at least three 4 GB containers and keep 4 GB for other uses. If a job needs 10 GB of RAM, would it use three containers, or would it use one container and start consuming the rest of the RAM?
Hello Guys,

We would like to keep our Hadoop prod, dev, and QA environments on standard settings, with their configurations kept in sync. We have 100+ data nodes in prod but only 8 nodes each in dev and QA.

We need to make sure all of them stay in sync. What is the best practice for keeping them the same?
Dear all,
I have video and audio files that I need to segment based on their text.
I need to segment all of the files; for example, a single word contains n audio frames and n visual frames (images).
Can anyone help or advise on how I can do this?

Thanks
Hi,

I am curious if someone knows the best way to set alerts based on certain keywords for financial filings such as 8-K, 10-K, etc. For example, I want to create an alert such that when the following filing appears on the website and contains a keyword like "PSU", I get an alert: https://www.sec.gov/Archives/edgar/data/1115128/000156459017019148/quot-8k_20170928.htm
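
A rough sketch of one way to approach this in Python is below. It polls EDGAR's daily form index and flags filings whose documents contain the keyword; the index URL pattern, the .idx line layout, and the use of the requests library are my assumptions and worth verifying, not details from the post.

# Minimal sketch (assumptions: daily-index URL pattern, form.idx column layout,
# and the requests library being available).
import datetime
import requests

KEYWORD = "PSU"
HEADERS = {"User-Agent": "research-script example@example.com"}  # SEC asks for a contact UA

def todays_filing_urls():
    """Yield (form_type, document_url) pairs from today's EDGAR daily form index."""
    today = datetime.date.today()
    quarter = (today.month - 1) // 3 + 1
    index_url = (
        "https://www.sec.gov/Archives/edgar/daily-index/"
        f"{today.year}/QTR{quarter}/form.{today:%Y%m%d}.idx"
    )
    resp = requests.get(index_url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        if line.startswith(("8-K", "10-K")):      # form type is the first column
            path = line.split()[-1]               # last column is the file path
            yield line.split()[0], "https://www.sec.gov/Archives/" + path

def mentions_keyword(url):
    text = requests.get(url, headers=HEADERS, timeout=30).text
    return KEYWORD.lower() in text.lower()

if __name__ == "__main__":
    for form_type, url in todays_filing_urls():
        if mentions_keyword(url):
            print(f"ALERT: {form_type} filing mentions {KEYWORD}: {url}")

From there the print could be swapped for an email or webhook notification, and the script scheduled (for example with cron) to run once a day.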

Thanks
Hello,

When we create datanodes, do we need to use local disks or SAN disks for storage? Most people recommend local disks. Why do we need to have local disks?
Detail data blocks will not query when one of them changes.
I had this question after viewing Advice for vb.net web application structure with code generator - refactoring, rewrite, change ORM?.

Hi Mr. tablaFreak,

Actually, I was looking for a similar code generator that would enable me to create data-intensive ASP.NET web applications in VB.NET, and after reading this article I think this is the best-performing approach for CRUD operations with big data. However, I am really not aware of how to bind class records to write literal HTML code in the code-behind as you mentioned, so kindly provide your code generator along with a few samples that can help with the same.

Your assistance is highly appreciated.
My email is SherifMazar@gmail.com
I'm working on an ad campaign management app. It has a feature where advertisers can assign caps to a campaign based on spending or conversions, on a daily, monthly, or lifetime basis, and there can be multiple caps per ad campaign. As soon as a campaign reaches 80% of a cap we send a notification to all publishers, and once a cap is reached we have to stop the campaign immediately. We're receiving thousands of events per second. Currently I'm querying the reporting table every second, but that is quite inefficient, and sometimes campaigns have already exceeded their caps by the time I detect it. So my question is:

What are the existing efficient programmatic or architectural solutions in the industry for handling these kinds of situations?
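
One common pattern is to keep a running counter per campaign that is updated as each event arrives, rather than polling a reporting table. Below is a minimal sketch of that idea in Python using Redis atomic counters; the key layout, the daily-cap handling, and the notify/stop helpers are hypothetical placeholders, not details from the post.

# Hypothetical sketch: update a per-campaign spend counter on every event and
# check the cap thresholds inline, instead of polling a reporting table.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def handle_event(campaign_id: str, spend: float, daily_cap: float) -> None:
    """Apply one spend event and react when 80% or 100% of the daily cap is crossed."""
    key = f"campaign:{campaign_id}:spend:daily"            # hypothetical key layout
    total = float(r.incrbyfloat(key, spend))               # atomic running total

    if total >= daily_cap:
        stop_campaign(campaign_id)
    elif total >= 0.8 * daily_cap:
        # A real system would also record that the 80% alert was already sent,
        # so publishers are not notified on every subsequent event.
        notify_publishers(campaign_id, total, daily_cap)

def stop_campaign(campaign_id: str) -> None:               # placeholder helper
    print(f"stopping campaign {campaign_id}")

def notify_publishers(campaign_id: str, total: float, cap: float) -> None:  # placeholder helper
    print(f"campaign {campaign_id} is at {total / cap:.0%} of its daily cap")

The same per-event counter idea scales out with a stream processor (for example Kafka consumers partitioned by campaign ID), with the threshold checks applied as each event is consumed.
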
I have a large number of PDF documents from which I need to extract text; the extracted text is used for further processing. I did this for a small subset of documents using the Tesseract API in a linear approach and got the required output. However, this takes a very long time when I have a large number of documents.

I tried to use the Hadoop environment's processing capabilities (MapReduce) and storage (HDFS) to solve this issue. However, I am having trouble implementing the Tesseract API in the Hadoop (MapReduce) approach. Since Tesseract converts the files into intermediate image files, I am confused as to how the intermediate image files from the Tesseract API process can be handled inside HDFS.

I have searched and unsuccessfully tried a few options, such as:

    I extracted text from PDFs by extending the FileInputFormat class into my own PdfInputFormat class using Hadoop MapReduce; for this I used Apache PDFBox to extract the text, but when it comes to scanned PDFs, which contain images, this solution does not give me the required results.

    I found a few answers on the same topic suggesting the use of -Fuse, or saying that one should generate the image files locally and then upload them into HDFS for further processing. I am not sure if this is the correct approach.

I would like to know approaches around this.
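
One way to avoid storing Tesseract's intermediate images in HDFS at all is to do the PDF-to-image conversion on the mapper's local disk and emit only the extracted text. Below is a rough sketch of a Hadoop Streaming mapper along those lines; it assumes each input line is the HDFS path of one PDF and that the pdf2image and pytesseract Python packages (plus poppler and Tesseract) are installed on every node, which are my assumptions rather than details from the post.

#!/usr/bin/env python3
# Hypothetical Hadoop Streaming mapper: each input line is assumed to be the
# HDFS path of one scanned PDF. The PDF is copied to local temp storage,
# rasterized there, OCR'd with Tesseract, and only the text is emitted, so no
# intermediate images ever need to live in HDFS.
import subprocess
import sys
import tempfile
from pathlib import Path

import pytesseract                        # assumed installed on every node
from pdf2image import convert_from_path   # assumed installed; needs poppler

def ocr_pdf(local_pdf: Path) -> str:
    pages = convert_from_path(str(local_pdf))   # page images stay on local disk / in memory
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

def main() -> None:
    with tempfile.TemporaryDirectory() as tmp:
        for line in sys.stdin:
            hdfs_path = line.strip()
            if not hdfs_path:
                continue
            local_pdf = Path(tmp) / Path(hdfs_path).name
            # Pull the PDF out of HDFS onto the mapper's local disk.
            subprocess.run(["hdfs", "dfs", "-get", hdfs_path, str(local_pdf)], check=True)
            text = ocr_pdf(local_pdf)
            # Emit key<TAB>value so the framework can shuffle/sort as usual.
            print(hdfs_path + "\t" + text.replace("\n", " "))
            local_pdf.unlink()

if __name__ == "__main__":
    main()

The input to the job would then be a small text file listing the PDF paths (one per line), so the expensive OCR work is spread across mappers without any image data passing through HDFS.
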
Free Tool: SSL Checker

Scans your site and returns information about your SSL implementation and certificate. Helpful for debugging and validating your SSL configuration.

One of a set of tools we are providing to everyone as a way of saying thank you for being a part of the community.

Hi,

I am trying to find a sample dataset of (cloud) storage server file-access logs for my research project. Can anyone please suggest ideas or places to find this type of sample file? I think maybe something like an FTP server's log dataset, because my project focuses on file access rather than web page access.

Thanks in advance.
I have an LDAP directory that contains a huge volume of data (approx. 200 million entries). The requirement is to download all 200 million entries to files. The current scripts pull data based on certain search criteria using the LDAP SEARCH command; the volume of data pulled is approx. 100 million entries and the time taken is 10 hours. Is there a better way to optimize the search so that the 200 million records can be downloaded in approx. 5 hours or so? Any suggestions are welcome.
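
One commonly used direction is to split the export into several non-overlapping searches (for example by the first character of an attribute) and run them in parallel, each with paged results so the server streams entries instead of building one huge result set. Below is a rough sketch using Python's ldap3 library; the host, base DN, attribute list, and the uid-based partitioning are hypothetical placeholders, not details from the post.

# Hypothetical sketch: export an LDAP tree via several disjoint paged searches
# running in parallel, one output file per partition.
from concurrent.futures import ThreadPoolExecutor

from ldap3 import Connection, Server, SUBTREE   # pip install ldap3

LDAP_HOST = "ldap.example.com"                    # placeholder
BASE_DN = "ou=people,dc=example,dc=com"           # placeholder
ATTRIBUTES = ["cn", "mail"]                       # placeholder attribute list
# Disjoint filters, e.g. by the first character of uid (placeholder scheme).
PARTITIONS = [f"(uid={c}*)" for c in "abcdefghijklmnopqrstuvwxyz0123456789"]

def export_partition(search_filter: str) -> None:
    conn = Connection(Server(LDAP_HOST), auto_bind=True)   # anonymous bind here
    out_name = "export_" + search_filter.strip("(*)").replace("=", "_") + ".txt"
    with open(out_name, "w", encoding="utf-8") as out:
        # paged_search streams results page by page instead of one giant response.
        entries = conn.extend.standard.paged_search(
            BASE_DN, search_filter, SUBTREE,
            attributes=ATTRIBUTES, paged_size=1000, generator=True,
        )
        for entry in entries:
            if entry.get("type") == "searchResEntry":
                out.write(str(entry["dn"]) + "\t" + str(entry["attributes"]) + "\n")
    conn.unbind()

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=8) as pool:
        pool.map(export_partition, PARTITIONS)

Whether this actually hits the 5-hour target depends on server-side limits (size limits, indexing on the partition attribute, and how many parallel connections the directory can serve), so those are worth checking first.
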
I have to process 100-200 GB of text files per day, at about 2 GB each.

Currently my Python code architecture looks like this:

def parsers(data):
    if (-----):
        regex_email(data)
    elif(----):
        regex_ip(data)
    elif(----):
        regex_url(data)

Now I want to call multiple instances of the parsers method at a time on different files, with the regex methods being called in parallel.
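
A minimal sketch of file-level parallelism with the standard library's multiprocessing module is below; the input directory, the worker count, and the idea of streaming each file line by line are my assumptions, since the snippet above does not show how data reaches parsers.

# Hypothetical sketch: run parsers() over many files at once with a process
# pool, so each large file is handled by its own worker process.
from multiprocessing import Pool
from pathlib import Path

from myparsers import parsers   # placeholder import: the parsers() function shown above

def parse_file(path: str) -> str:
    """Stream one file through parsers() line by line and return its path when done."""
    with open(path, "r", encoding="utf-8", errors="replace") as fh:
        for line in fh:          # avoids loading a whole 2 GB file into memory
            parsers(line)
    return path

if __name__ == "__main__":
    files = [str(p) for p in Path("/data/incoming").glob("*.txt")]   # placeholder directory
    with Pool(processes=8) as pool:                                   # tune to core count
        for finished in pool.imap_unordered(parse_file, files):
            print("done:", finished)

File-level parallelism is usually enough here; running regex_email, regex_ip, and regex_url in parallel for the same chunk adds coordination overhead for little gain, since each worker process already keeps a core busy.
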
I've been reading about Microsoft Delve and its ability to understand one's working habits.

Is this considered big data analytics?

How does it work exactly?
I am writing a MapReduce program in Hadoop and have executed it successfully. Below is the attached snapshot of the output, showing keys with their values. Now, out of these values, I need the top 5. I wrote the following command in the terminal:

hadoop fs -cat /home/yogesh/Work/outputs/part-r-00000 | sort –n –k2 –r | head  –n5

Now I am getting the error below (I have attached the error snapshot as well; my MapReduce program executed successfully, and the screenshot shows the output in HDFS):
head: cannot open ‘–n5’ for reading: No such file or directory
sort: cannot read: –n: No such file or directory
cat: Unable to write to output stream.

I think it is giving this error due to some permission issue.

Please help me work out how I can solve this problem. Any ideas are highly appreciated. Is it something I need to add in hdfs-site.xml?
hadoop-command-and-error.png
I have an FTP program that allows me to schedule scripts to run, so we connect to FTP sites and download documents into folders that are created that day. We recently had a change with one of the sites so that it is no longer plain FTP; it is encrypted and uses WinSCP. This works if someone manually connects, finds the files, and downloads them, but I am tasked with making this happen automatically again.

I have run into the problem that my existing program has no way to enter the passphrase for the key that is required after the initial login name and password. I have read about scripts that could be created and used within WinSCP, but none that address my problem. I have looked at other software packages but so far have found none that will work.

Does anyone have a script that will allow the login with the username and password, and then enter the passphrase when the prompt asking for it comes up, so that I can try to salvage the pieces of my existing script, which creates a new folder each day using the date as the folder name and downloads the files from the site? I am not a script person, which is part of my problem, but I can understand the basics, so if someone could share this info, or perhaps let me know if there is a program that will do this, it would be greatly appreciated.
We are looking for a social media analytics tool (one tool) that supports the following:

1- It can automatically pull data periodically from a given Facebook page and Twitter account.
2- It can analyze the data statistically and for sentiment.
3- Its sentiment engine can be updated by adding custom user keywords.
4- It can categorize posts by topic (e.g. maintenance, service, news, complaint...).
5- It can provide results as structured raw data, so we can build custom reports from the provided data on another platform, for example:
list of posts [post ID, post, topic (or category), sentiment analysis of comments (# of positive, # of neutral, # of negative), # of likes, # of shares, created date...]
list of analyzed comments for each post [post ID, comment, sentiment (positive, neutral, negative), created date, location...]
