Bruce Gust asked:

How can I make this more efficient?

I'm parsing out a JSON file using the following code:

 //This input should be from somewhere else, hard-coded in this example
$file_name = '00_8ptcd6jgjn201311060000_day.json.gz';
// Raising this value may increase performance
$buffer_size = 4096; // read 4kb at a time
$out_file_name = str_replace('.gz', '', $file_name); 
// Open our files (in binary mode)
$file = gzopen($file_name, 'rb');
$out_file = fopen($out_file_name, 'wb'); 
// Keep repeating until the end of the input file
while (!gzeof($file)) {
    // Read buffer-size bytes; both fwrite and gzread are binary-safe
    fwrite($out_file, gzread($file, $buffer_size));
}
// Files are done, close files
fclose($out_file);
gzclose($file);

$jsondata = file_get_contents("00_8ptcd6jgjn201311060000_day.json");
$json = json_decode($jsondata, true);
//echo $json
$output = "<ul>";
	foreach($json['id'] as $id) {
	$output .= "<h4>".$id."</h4>";
	$output .="<li>".$id['actor/id']."</li>";
	$output .="<li>".$id['actor/displayName']."</li>";
	$output .="<li>".$id['actor/postedTime']."</li>";
	$output .="<li>".$id['generator/displayName']."</li>";
	$output .="<li>".$id['geo/type']."</li>";
	$output .="<li>".$id['geo/coordinates/0']."</li>";
	$output .="<li>".$id['geo/coordinates/1']."</li>";
	}
$output .="</ul>";
echo $output;

The first part, decompressing the file, works fine. The problem comes when I try to print the output. I get this:

( ! ) Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 1086054108 bytes) in C:\wamp\www\json\uncompress.php on line 30

How can I process things incrementally so I don't exhaust the memory limit?
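For what it's worth, the approach that eventually worked further down in this thread is to stream the file and decode one record at a time instead of loading the whole thing with file_get_contents(). A minimal sketch of that idea, assuming the file is newline-delimited JSON (one object per line):

$handle = fopen('00_8ptcd6jgjn201311060000_day.json', 'r');
if (!$handle) {
    die('Could not open JSON file');
}
// Read one line (one record) at a time so memory use stays flat
while (($line = fgets($handle)) !== false) {
    $line = trim($line);
    if ($line === '') {
        continue;               // skip blank lines
    }
    $record = json_decode($line, true);
    if ($record === null) {
        continue;               // skip anything that is not valid JSON
    }
    // Work with $record here: print it, insert it into a database, etc.
}
fclose($handle);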
ASKER CERTIFIED SOLUTION
Ray Paseur
This solution is only available to Experts Exchange members.
Bruce Gust (ASKER):

Morning, Ray!

I agree. The size of the file is 1,060,592 KB, so doing things in stages is going to be essential.

The file, by the way, is a decompressed JSON file that I need to parse and then insert into a database. I've got 365 such files to process the same way. Once that's all done, I plan on writing a script that exports the parsed results from the database to a CSV file.

That's the goal for today.

What do you think?
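The CSV export step at the end of that plan could be handled with fputcsv() once the rows are in the database. A rough sketch, where the PDO connection string, the table name, and the column names are all hypothetical placeholders:

// Hypothetical connection and table -- adjust to the real schema
$pdo = new PDO('mysql:host=localhost;dbname=json_import', 'user', 'password');

$csv = fopen('parsed_results.csv', 'w');
fputcsv($csv, array('actor_id', 'display_name', 'posted_time', 'lat', 'lng')); // header row

$stmt = $pdo->query('SELECT actor_id, display_name, posted_time, lat, lng FROM parsed_records');
while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
    fputcsv($csv, $row);   // one database row per CSV line
}

fclose($csv);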
Ray Paseur:
PHP may not be the right tool for this, or you may need to get a very, very large server and increase the memory limit to the stratosphere. What instruction is on line 30?
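For reference, the memory-limit increase Ray mentions can be done per script with ini_set(); for a file this size it is at best a stopgap, but this is what raising the ceiling looks like:

// Raise the limit for this script only; the value here is arbitrary
ini_set('memory_limit', '2048M');
echo ini_get('memory_limit');   // confirm the new value took effect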
Bruce Gust (ASKER):
Hey, Ray!

The "instruction" at line 30 is $jsondata=file_get_contents("00_8ptcd6jgjn201311060000_day.json");

Since getting into work this morning, I've been trying to figure out how to break the elephant down into bite-sized pieces, and I've yet to figure it out.

Here's what I've got thus far:

$jsondata = file_get_contents("00_8ptcd6jgjn201311060000_day.json");
//breaking the elephant down into byte sized pieces
$json_size = 4096;
$buffer = fgets($jsondata, $json_size);
$json = json_decode(($buffer), true);
//echo $json
 while (!feof($json))
 {

Problem is, I can't get to $buffer because I'm getting hung up on $jsondata, since the file is over 1 GB.

Is there a way to do something like $jsondata=file_get_contents($file_name, $json_size)?

I see how it works with fgets, but how about on the "get_contents" side?
Ray Paseur:
Where did that file come from? Is there a URL that I can read?

PHP functions are all documented in the online man pages. Example:
http://php.net/manual/en/function.file-get-contents.php
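As a side note on the earlier question about passing a size to file_get_contents(): the function does take optional offset and length arguments, so a fixed-size chunk can be read without an open handle. Whether that helps here is another matter, since a byte-offset chunk can start or end in the middle of a record, which is why fgets() on a handle is usually the easier route:

// Read only the first 4096 bytes of the file
// Signature: file_get_contents($filename, $use_include_path, $context, $offset, $length)
$chunk = file_get_contents('00_8ptcd6jgjn201311060000_day.json', false, null, 0, 4096);

// The next 4096 bytes -- note this may begin mid-record
$next = file_get_contents('00_8ptcd6jgjn201311060000_day.json', false, null, 4096, 4096);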
Bruce Gust (ASKER):
Ray, here's what I came up with:

$chunk_size = 4096;
$url = '00_8ptcd6jgjn201311060000_day.json';
$handle = @fopen($url, 'r');
if (!$handle) {
    echo "failed to open JSON file";
}
while (!feof($handle)) {
    $buffer = fgets($handle, $chunk_size);
    if (trim($buffer) !== '') {
        $obj = json_decode($buffer, true);
        // the rest of my code
It works!

Thanks for your help!
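For completeness, here is a sketch of how that working loop might be closed out and tied to the database insert mentioned earlier. The $pdo connection, the table name, and the column list are hypothetical, and the array keys will need adjusting to whatever structure json_decode() actually returns for these records:

// Sketch only: same loop as above, with braces closed and an insert added
$insert = $pdo->prepare(
    'INSERT INTO parsed_records (actor_id, display_name, posted_time) VALUES (?, ?, ?)'
);

while (!feof($handle)) {
    $buffer = fgets($handle, $chunk_size);
    if (trim($buffer) !== '') {
        $obj = json_decode($buffer, true);
        if (is_array($obj)) {
            $insert->execute(array(
                $obj['actor']['id'],          // adjust keys to the real structure
                $obj['actor']['displayName'],
                $obj['postedTime'],
            ));
        }
    }
}
fclose($handle);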