Solved

Appending a file to another

Posted on 2002-03-15
376 Views
Last Modified: 2010-04-05
Hi...
   I'm using Delphi 3 Pro - is there some super-quick way of appending a file of text records to an already-existing text file, other than opening the 1st file and reading and writing each record to the 2nd file? I've tried BlockRead/BlockWrite, but that's even slower (a *lot* slower) than doing regular reads and writes. I have to do this 'append' of one file to another thousands of times within my app, and it's a real bottleneck for me.
   Like something similar to the DOS 'Copy' command... or maybe in fact the DOS Copy command itself - if there's a way to invoke it without having that scrubby black DOS window appear while my app runs.

Thanks
  Shawn
Question by:aztec

47 Comments
 
LVL 3

Accepted Solution

by:
SteveWaite earned 13 total points
ID: 6870368
For the DOS solution:
I create a batch file (a text file) on the fly, then wait for it to appear in the file system with a loop and a timeout. Then I use:
ExecuteFile(BatchFile, '', TheDirectory, SW_SHOWMINNOACTIVE);
and don't get a black box!
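
For illustration, here is a minimal sketch of that batch-file approach. AppendViaBatch and its parameters are made-up names, and ShellExecute (from the ShellAPI unit) stands in for the ExecuteFile helper:

{ needs Windows, SysUtils and ShellApi in the uses clause }
procedure AppendViaBatch(const SrcFile, DstFile, WorkDir: string);
var
  Bat: TextFile;
  BatName: string;
begin
  BatName := WorkDir + '\append.bat';
  AssignFile(Bat, BatName);
  Rewrite(Bat);
  { copy /b concatenates in binary mode, so nothing is truncated at a Ctrl-Z }
  WriteLn(Bat, 'copy /b "' + DstFile + '"+"' + SrcFile + '" "' + DstFile + '"');
  CloseFile(Bat);
  { SW_SHOWMINNOACTIVE runs the console window minimized and unfocused }
  ShellExecute(0, 'open', PChar(BatName), nil, PChar(WorkDir), SW_SHOWMINNOACTIVE);
end;

Note that ShellExecute returns immediately, so the copy runs asynchronously; if the app needs the result right away it has to wait for it, as described above.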

Otherwise I would do a proper file copy like the one in
the Filmanex example (FmxUtils.pas), but adjusted to concatenate.

Regards,
Steve
 

Author Comment

by:aztec
ID: 6870391
Thanks Steve...
   I follow you on the DOS solution, but this one:

"Otherwise i would do a proper file copy like the one in
examples Filmnx but adjust to concat."

...I don't know what you mean. Can you explain in a little more detail?

Off the top of your head, which approach do you think would be the faster one to execute?

Thanks
   Shawn
 
LVL 5

Expert Comment

by:Gwena
ID: 6870496
Hmmm... this seems to work.
Just put 3 buttons and an open-file dialog on a form and use this source. Label Button1 as 'load file 1', Button2 as 'load file 2' and Button3 as 'combine files'.
This is the best I could come up with in 10 minutes at midnight... so it's probably got a bug or 2 :-)

After running this, your 1st file will have your 2nd file appended onto its end..... I hope... and this should be fast... I hope :-)





unit Unit1;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, StdCtrls;

type
  TForm1 = class(TForm)
    Button1: TButton;
    Button2: TButton;
    Button3: TButton;
    OpenDialog1: TOpenDialog;
    procedure Button1Click(Sender: TObject);
    procedure Button3Click(Sender: TObject);
    procedure Button2Click(Sender: TObject);
  private
    { Private declarations }
  public
    { Public declarations }
  end;

var
  Form1: TForm1;
  File1Stream,File2Stream: TMemoryStream;
  File1Name,File2Name: String;

implementation

{$R *.dfm}

procedure TForm1.Button1Click(Sender: TObject);
begin
  if OpenDialog1.Execute then          { only take the name if the user picked one }
    File1Name := OpenDialog1.FileName;
end;

procedure TForm1.Button2Click(Sender: TObject);
begin
  if OpenDialog1.Execute then
    File2Name := OpenDialog1.FileName;
end;

procedure TForm1.Button3Click(Sender: TObject);
begin
  File1Stream := TMemoryStream.Create;
  File2Stream := TMemoryStream.Create;
  try
    File1Stream.LoadFromFile(File1Name);
    File1Stream.Position := File1Stream.Size;   { append at the end of file 1 }
    File2Stream.LoadFromFile(File2Name);
    File2Stream.Position := 0;                  { copy file 2 from its start }
    File1Stream.CopyFrom(File2Stream,File2Stream.Size);
    File1Stream.SaveToFile(File1Name);
  finally
    File1Stream.Free;
    File2Stream.Free;
  end;
end;



end.


 

Author Comment

by:aztec
ID: 6870555
Gwena... when it does the 'LoadFromFile', what if the file to load is several hundred megs big (which could be possible)? Won't that blow my computer's memory out of the water and crash?

Thanks
   Shawn
 
LVL 6

Expert Comment

by:edey
ID: 6870577
umm, I'd use Gwena's solution, but with TFileStream instead. That way you avoid the whole memory issue. Something like the following (untested) should work:

var
   src, dst : TFileStream;
begin
     src := TFileStream.Create(the_src_file, fmOpenRead or fmShareDenyNone);
     try
       dst := TFileStream.Create(the_dst_file, fmOpenReadWrite or fmShareExclusive);
       try
         dst.Seek(0, soFromEnd);  { offset first, then origin - seek to the end }
         dst.CopyFrom(src, 0);    { a count of 0 copies the entire source }
       finally
         dst.Free;
       end;
     finally
       src.Free;
     end;
end;


GL
Mike

ps - the docs say that CopyFrom with a count of 0 copies the entire stream.
 
LVL 5

Expert Comment

by:Gwena
ID: 6870600
WoW!  Those are BIG files you want to handle!

You would get an out-of-memory error when trying to handle a stream that big, I think.....

With files so large, speediness is out of the question, I think!

The only way I can think of to combine such huge files very quickly would be to somehow alter the FAT and add the 2nd file to the chain of the first ??? Is that possible??? I just don't know....

Any other method seems to involve re-writing a lot of data to disk... and it takes time just to read or write that much data.

Maybe a scheme could be designed to leave all the files as they are and simply log all the changes.... then at some later time the list of changes could be used as a guide to build a final file? i.e. MyFile = datafile1 + datafile7 + datafile98 at loc 126,234,001 to end + whatever.... you get the idea.... this scheme would postpone all the time-consuming tasks until a time when the user would not be sitting there slack-jawed waiting for the operation to finally end... maybe late at night or some such.

Good Luck!

maybe Madshi knows a way to do this kind of file handling quickly ... but maybe it's more than 50 points worth of knowledge :-)
 
LVL 5

Expert Comment

by:Gwena
ID: 6870603
Hi edey... that sounds good.... I just don't have any real experience with handling HUGE data files :-)
 
LVL 6

Expert Comment

by:edey
ID: 6870683
I just _had_ to try a little experiment ;p

var
   src,dst : TFileStream;
   ix,iy,h : integer;
   kSrc,kDst : string;
   t : TTimeStamp;
   buf : array[0..1023] of byte;
begin
     h := fileCreate('e:\src.txt');
     memo1.text := intToStr(h);
     fileClose(h);
     h := fileCreate('e:\dst.txt');
     memo1.lines.add(intToStr(h));
     fileClose(h);
     src := TFileStream.Create('e:\src.txt',fmOpenReadWrite or fmShareExclusive);
     src.Size := 400*1024*1024;
     src.Seek(0, soFromBeginning);
     kSrc := 'SRC ';
     dst := TFileStream.Create('e:\dst.txt',fmOpenReadWrite or fmShareExclusive);
     dst.Size := 400*1024*1024;
     dst.Seek(0, soFromBeginning);
     kDst := 'DST ';

     t := dateTimeToTimeStamp(now);
     for ix := 0 to 1023 do
     begin
          for iy := 0 to 1023 do
          try
             src.write(pointer(kSrc)^,4);
             dst.write(pointer(kDst)^,4);
          except
          end;
          caption := intToStr((100*ix)div 1023);
     end;
     dst.free;
     src.free;
     memo1.lines.add('Done Creating Files - '+intToStr((dateTimeToTimeStamp(now).Time-t.Time)div 1000)+' seconds');

     src := TFileStream.Create('e:\src.txt',fmOpenReadWrite or fmShareExclusive);
     src.Seek(0, soFromBeginning);
     dst := TFileStream.Create('e:\dst.txt',fmOpenReadWrite or fmShareExclusive);
     dst.Seek(0, soFromEnd);  { offset first, then origin }
     dst.Size := dst.size*2;

     t := dateTimeToTimeStamp(now);
     for ix := 0 to 1023 do
     try
        src.Read(buf,1024);
        dst.write(buf,1024);
        caption := intToStr((100*ix)div 1023);
     except
     end;
     dst.free;
     src.free;
     memo1.lines.add('Done Appending Files - '+intToStr((dateTimeToTimeStamp(now).Time-t.Time)div 1000)+' seconds');
end;

Which gave me these results (for creating and appending two 400 MB text files):

Done Creating Files - 41 seconds
Done Appending Files - 15 seconds

This was done on my machine (Thunderbird 900, 512 MB RAM, run-of-the-mill 7200 RPM HDs).

GL
Mike
 

Expert Comment

by:DelFreak
ID: 6871330
Listening...
 
LVL 5

Expert Comment

by:Gwena
ID: 6871581
Hey Edey :-)

That looks great.... much faster than I would have imagined possible....
I just upgraded my computers... one has an AMD 1600+ ..
I will have to time that code on it... but I doubt it's much faster.... my drives are old 5400s and I think this is pretty disk-bound... :-)
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6871726
FmxUtils.pas in Demos\Doc\Filmanex
shows how to copy files quickly regardless of size; choose 4K blocks. TMemoryStream loads the whole file and gets worse as file size increases; this CopyFile method does a chunk at a time. Alter it slightly to achieve what you need. There's a lot of file-handling know-how in that unit.

Regards,
Steve
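
For illustration, a hedged sketch of that chunked copy adjusted to concatenate, using the SysUtils file-handle routines in the FmxUtils style (AppendFile and its parameters are made-up names; error handling omitted):

procedure AppendFile(const Src, Dst: string);
const
  ChunkSize = 4096;
var
  S, D: Integer;                        { file handles }
  Buf: array[0..ChunkSize-1] of Byte;
  N: Integer;
begin
  S := FileOpen(Src, fmOpenRead or fmShareDenyWrite);
  D := FileOpen(Dst, fmOpenWrite or fmShareExclusive);
  try
    FileSeek(D, 0, 2);                  { origin 2 = from end: append rather than overwrite }
    repeat
      N := FileRead(S, Buf, ChunkSize);
      if N > 0 then FileWrite(D, Buf, N);
    until N < ChunkSize;
  finally
    FileClose(S);
    FileClose(D);
  end;
end;

Only a ChunkSize buffer is ever in memory, so the size of the files no longer matters.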
 
LVL 6

Expert Comment

by:edey
ID: 6872055
hmmm, an interesting example, though it looks like it uses 8K chunks (D6):

const
  ChunkSize: Longint = 8192; { copy in 8K chunks }

Which actually makes sense, as a FAT32 partition of 8-32 GB would have an 8 KB cluster size, so I would assume that the Win32 API would be able to optimize 8 KB-aligned writes.

One interesting thought though - my/Gwena's example used streams and the FMX demo used functions that deal with file handles directly, but they boil down to the same thing. Hence, if you're using TFileStreams with 8K buffers (instead of the 4 bytes I used to create the files and then the 1 KB buffer when concatenating them), they should compile to pretty much the same thing. Well, my example would have a little more indirection, but 'tis insignificant in comparison to the disk-access bottleneck.

GL
Mike

(from classes.pas)
function THandleStream.Read(var Buffer; Count: Longint): Longint;
begin
  Result := FileRead(FHandle, Buffer, Count);
  if Result = -1 then Result := 0;
end;

function THandleStream.Write(const Buffer; Count: Longint): Longint;
begin
  Result := FileWrite(FHandle, Buffer, Count);
  if Result = -1 then Result := 0;
end;
 

Author Comment

by:aztec
ID: 6872109
Thanks for your suggestions everyone. I've been doing a little testing. Let me give you a bit more detail on what I'm doing: in my app, I create a "temp" text file, write some records to it, then append this temp file to an existing permanent file. Then I clear out the temp file ("Rewrite") and start all over again. This process is repeated tens of thousands, possibly even hundreds of thousands, of times per run. Most of the time this temp file is quite small (only a few K), but it *could* potentially be a few hundred megs big (this is possible, and I must accommodate it).
   Anyway, in some testing I did, I determined conclusively that the thing that is slowing my app down the most is the clearing out of my temp file for rewriting - i.e. the doggone "Rewrite(tempfile)" statement! As it turns out, *this* is the slowpoke... and not so much the method of appending the records to the permanent file.

Now I have to switch gears and figure out a faster way of wiping out my temp file and creating a new one for writing to. Would "TFileStream.Create" be faster in this regard?

Thanks
   Shawn
 
LVL 6

Expert Comment

by:edey
ID: 6872161
If I were you I wouldn't "rewrite" the file, but rather do it yourself: keep a counter of how many records you have in the temp file; when you're ready, append that many to the other file and set the counter to zero. Increase the size of the temp file if need be, just never resize it smaller.

GL
Mike
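
For illustration, a hedged sketch of that counter idea (TempStream, TempBytes and FlushTempTo are made-up names; it counts bytes rather than records, which amounts to the same thing when flushing):

var
  TempStream: TFileStream;   { kept open for the whole run }
  TempBytes: Longint;        { how many bytes of TempStream are currently valid }

procedure FlushTempTo(Dest: TFileStream);
begin
  if TempBytes > 0 then
  begin
    TempStream.Seek(0, soFromBeginning);
    Dest.Seek(0, soFromEnd);
    Dest.CopyFrom(TempStream, TempBytes);  { copy only the valid bytes }
  end;
  TempBytes := 0;                          { logically "empty" the temp file }
  TempStream.Seek(0, soFromBeginning);     { the next write overwrites the old junk }
end;

Each write to TempStream would also do Inc(TempBytes, n); the file itself is never shrunk.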

Author Comment

by:aztec
ID: 6872243
A little testing showed that using:

tempfile.Free;
tempfile:=TFileStream.Create('tempfile.txt', fmCreate);

is MUCH faster than using the Rewrite(tempfile) method. Looks like I'm on the right track. I'd like to speed it up even more, if possible, by not having to call 'Free' then 'Create' each time. Is it possible to 'Create' my tempfile just once at the start, and then somehow 'overwrite' it without having to create it all over again?

Thanks
   Shawn
 
LVL 6

Expert Comment

by:edey
ID: 6872254
Yes - as I mentioned in my last comment, you just have to keep track of how big the temp file currently needs to be. Oh, and to return to the "top" of the stream, you can Seek(0, soFromBeginning);

GL
Mike

Author Comment

by:aztec
ID: 6872355
Mike, so if I use this 'Seek' command and commence writing from the beginning again - does this wipe out all the previous data in the file...?

Thanks
   Shawn
 
LVL 6

Expert Comment

by:edey
ID: 6872476
Mmm, Seek just changes the current place you're reading from, or writing to, in the stream. It doesn't modify the data in any way. So after you copy your temp records and seek back to the start of temp, you can consider the temp stream to be full of junk. That's why you'd need the counter I mentioned. An example:

We do some manipulations in temp, resulting in 10 records to be appended.
Temp is now 10 records long, and counter = 10.
So we copy the first 10 records from temp to... call it list, and set counter := 0;
Temp is still 10 records large, but filled with useless data. Consider it uninitialized, if it matters.
We do some more manipulations, resulting in 3 records (and in the process we'd have incremented counter, so counter now = 3).
We copy the first counter (3) records to list, set counter to 0, seek to the "top", and begin all over again.

This way we resize temp _only_ if it needs to get larger, and (if temp doesn't have to be initialized) we don't have to bother "zeroing" out all the old records.

GL
Mike
 

Author Comment

by:aztec
ID: 6873159
Mike - TFileStream looked like it would handle my needs, but I see another snag: when it's time to read each record from my temp file via the Read command for FileStreams, I won't have any way of knowing the exact number of bytes to read in. I'm writing variable-length text strings to my temp file that end with a CR/LF - how do I read back a line at a time with the FileStream Read?

Thanks
   Shawn
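
For illustration, one hedged way to do that - read a byte at a time until the LF (ReadLineFrom is a made-up name; a production version would read into a buffer rather than byte-by-byte):

{ needs Classes in the uses clause }
function ReadLineFrom(S: TStream): string;
var
  C: Char;
begin
  Result := '';
  while S.Read(C, 1) = 1 do
  begin
    if C = #10 then Break;                  { LF ends the line }
    if C <> #13 then Result := Result + C;  { drop the CR }
  end;
end;

An empty Result together with S.Position = S.Size signals end of file.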
 

Author Comment

by:aztec
ID: 6873515
Another question also - how many of these TFileStream objects can I have open at once? Is it RAM-dependent or something? My app could potentially require a few thousand of these open at the same time.

Thanks
   Shawn
 
LVL 14

Expert Comment

by:AvonWyss
ID: 6873687
No, seeking as such does not wipe out the data. But setting its Size does:

tempfile.Size := 0;

will reset the stream and set your file pointer to the start of the file.
 
LVL 5

Expert Comment

by:Gwena
ID: 6874519
Hi Aztec :-)

'A few thousand of these open at the same time'

Holy moly... maybe it's time to think hard about your method of doing all this... whenever I try to write something and end up with a large number of open files or streams or whatever, I start trying to come up with another way to do things... Too much of anything is not good! and thousands of file streams really does not sound good :-)

 

Author Comment

by:aztec
ID: 6874561
Speed is a priority, Gwena - having them all open saves me from closing and re-opening over and over again, which really slows down execution. I know it's not terribly efficient to have them all open at once, but for the sake of speed I think I have to.

Shawn
 
LVL 14

Expert Comment

by:AvonWyss
ID: 6874934
aztec, I think that Gwena wanted to say that there may be a better method to achieve what you're achieving now, with less resource waste and overhead. And I agree; but not knowing the exact task you're trying to solve, we cannot give you specific hints.
 
LVL 14

Expert Comment

by:AvonWyss
ID: 6874954
Shawn, have you noticed that all your comments are posted twice? Be aware that this could be due to hitting the "Refresh" button of your browser, which actually also re-posts the comment. You should use the address from the notification email, and the page will reload properly without re-posting.
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6875250
edey, hi!
Don't know why such an issue has raised your brow, but...
I'm no expert (ahem), but last time I read, Windows file-system caches were in 4K chunks. The sector size is another thing. You may have 8K sectors on your drive, but that's because hard disks are getting big. What this means for performance, I do not know. Benchmarks on my system with 25 MB files and a 4K chunk size were faster (Win98 at the time and, um, 4K sectors), but only marginally. I have a data-collection app handling these file sizes. I suspect there is a lot more to consider, such as swap-file location etc.

Apparently we should be using 'file mapping'. I'm not sure that isn't what we are talking about here anyway, but does anyone know if there is more to file handling in the API?

Regards,
Steve
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6875402
Said sector, meant cluster, you guessed.

Just a thought: the 'hard disk' is actually a virtual drive provided by the Windows OS, hmm.

You may want to try creating a new file name each time instead of rewriting, and remember the file names in another file. Have the spent files deleted later?

Or how about instead sticking your data in BLOB fields with the BDE or something?
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6875425
...more, Shawn - sorry, you also asked what I thought about the DOS method and its potential speed. I think you should be able to get the best result programmatically.

...or what about only ever adding to one file, but adding some extra text with each 'block' to give your program whatever knowledge it requires (since we made just one file)...
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6875438
File Mapping Functions
The following functions are used with file mapping:

CreateFileMapping - Creates or opens a named or unnamed file-mapping object for the specified file.
FlushViewOfFile - Writes to disk a byte range within a mapped view of a file.
MapViewOfFile - Maps a view of a file into the address space of the calling process.
MapViewOfFileEx - Maps a view of a file into the address space of the calling process, at a suggested base address.
OpenFileMapping - Opens a named file-mapping object.
UnmapViewOfFile - Unmaps a mapped view of a file from the calling process's address space.


may be worth checking out
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/filemap_79wn.asp

Regards,
Steve
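
For illustration, a hedged sketch of reading a file through a mapping (straight Win32 calls from the Windows unit; error handling omitted, file name made up):

var
  hFile, hMap: THandle;
  P: Pointer;
begin
  hFile := CreateFile('c:\src.txt', GENERIC_READ, FILE_SHARE_READ, nil,
                      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
  P := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);  { length 0 maps the whole file }
  try
    { P now points at the file contents, readable like ordinary memory }
  finally
    UnmapViewOfFile(P);
    CloseHandle(hMap);
    CloseHandle(hFile);
  end;
end;

Whether this beats plain buffered reads for a simple sequential append is an open question; the OS still has to move the same bytes.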
 

Author Comment

by:aztec
ID: 6875843
Hi guys...
   well, I've implemented the TFileStream approach (only 2 files to start, not thousands). It will read and write successfully for a little while, then throw up this error:

"access violation at 0x4021dc: write of address 0x776f6e87
83 20 FE 8B 42 04 3B D0 75 1A 8B C3"

...anybody ever run into something like that before?

Here's my procedure that does the read/write:

procedure tempappend;
var
  CopyBuffer : Pointer;
  ibytesOK, ibytesWr : LongInt;
CONST
  LumpSize : LongInt = 4096;
begin
  tempfile.Seek(0, soFromBeginning);

  repeat
    ibytesOK:=tempfile.Read(CopyBuffer^, LumpSize);
    ibytesWr:=outputfile.Write(CopyBuffer^, ibytesOK);
  until (ibytesOK < LumpSize);
 
  tempfile.Size:=0;
  tempfile.Seek(0, soFromBeginning);

end; {* tempappend *}


...I added that 2nd Seek statement at the bottom after clearing out the temp file, but it makes no difference.

The data that it *does* write to the files looks fine, it just craps out after a while all by itself... weird.
 

Author Comment

by:aztec
ID: 6876019
a little more info - I shut my system down and restarted, and put some display statements in to trace where it bombs out. Looks like it crashed on the .Read statement:

ibytesOK:=tempfile.Read(CopyBuffer^, LumpSize);

...came up with this error window

Access Violation at address 63634068. Read of address 63634068.

Thanks
   Shawn
 

Author Comment

by:aztec
ID: 6876032
whoops, never mind. Found my mistake...forgot my GetMem statement for the CopyBuffer pointer! duhhh

sorry....
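
For completeness, a hedged version of the procedure with the missing allocation added (everything else as in the original):

procedure tempappend;
var
  CopyBuffer: Pointer;
  ibytesOK, ibytesWr: LongInt;
const
  LumpSize: LongInt = 4096;
begin
  GetMem(CopyBuffer, LumpSize);          { the allocation that was missing }
  try
    tempfile.Seek(0, soFromBeginning);
    repeat
      ibytesOK := tempfile.Read(CopyBuffer^, LumpSize);
      ibytesWr := outputfile.Write(CopyBuffer^, ibytesOK);
    until ibytesOK < LumpSize;
    tempfile.Size := 0;                  { logically empties the temp file }
    tempfile.Seek(0, soFromBeginning);
  finally
    FreeMem(CopyBuffer, LumpSize);
  end;
end; {* tempappend *}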
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6877406
In NT you can have up to about 2^30 kernel object handles
(file handles etc.) per process. Go collect those files, Shawn!
 

Author Comment

by:aztec
ID: 6877472
Steve, I can have 2^30 TFileStream objects open at once??
 
LVL 3

Expert Comment

by:SteveWaite
ID: 6877792
I doubt it, but that's what they say - theoretically!

It depends on your system.

Another thought: your app has access to 4 GB of address space, but if you have, say, 128 MB of RAM instead of 4 GB(!), the rest of the 4 GB spills over onto your disk. When you fill a TMemoryStream on such a 128 MB system, eventually you are just writing back to disk (your virtual RAM).

These considerations are worth knowing about when programming large-file support.

Steve


 

Author Comment

by:aztec
ID: 6880671
Well, I've got things working with the TFileStream solution. Thanks to all of you for your comments and suggestions, but it looks like the info I used in my solution came mainly from edey (for the TFileStream stuff) and AvonWyss (for info about the .Seek command).

To be fair - is there any way to split the points between these 2 guys?

Cheers
   Shawn
 
LVL 14

Expert Comment

by:AvonWyss
ID: 6881224
Sure, that's OK with me... ;-)
 
LVL 6

Expert Comment

by:edey
ID: 6881365
Hey, any points distribution is fine with me ;p
You can get them split - you have to post a (0 point) question in the C/S topic asking for the points to be reduced, then ask another question in the Delphi topic for x points, being sure to leave a link here so the nosy types know what the points are for ;)

GL
Mike
 

Author Comment

by:aztec
ID: 6881474
what's the C/S topic?
 
LVL 14

Expert Comment

by:AvonWyss
ID: 6881481
It's the one labeled "Community Support" on the left nav bar of the page.
http://www.experts-exchange.com/jsp/qList.jsp?ta=commspt
 
LVL 1

Expert Comment

by:Moondancer
ID: 6939338
ADMINISTRATION WILL BE CONTACTING YOU SHORTLY.  Moderators Computer101 or Netminder will return to finalize these if still open in seven days.  Please post closing recommendations before that time.

Question(s) below appears to have been abandoned. Your options are:
 
1. Accept a Comment As Answer (use the button next to the Expert's name).
2. Close the question if the information was not useful to you, but may help others. You must tell the participants why you wish to do this, and allow for Expert response.  This choice will include a refund to you, and will move this question to our PAQ (Previously Asked Question) database.  If you found information outside this question thread, please add it.
3. Ask Community Support to help split points between participating experts, or just comment here with details and we'll respond with the process.
4. Delete the question (if it has no potential value for others).
   --> Post comments for expert of your intention to delete and why
   --> YOU CANNOT DELETE A QUESTION with comments; special handling by a Moderator is required.

For special handling needs, please post a zero point question in the link below and include the URL (question QID/link) that it regards with details.
http://www.experts-exchange.com/jsp/qList.jsp?ta=commspt
 
Please click this link for Help Desk, Guidelines/Member Agreement and the Question/Answer process.  http://www.experts-exchange.com/jsp/cmtyHelpDesk.jsp

Click your Member Profile to view your question history and keep them updated as the collaboration effort continues, to maintain your open and locked questions.  If you are a KnowledgePro user, use the Power Search option to find them.  Anytime you have questions which are LOCKED with a Proposed Answer which does not serve your needs, please reject it and add comments as to why.  In addition, when you do grade the question, if the grade is less than an A, please add a comment as to why.  This helps all involved, as well as future persons who may access this item for help.

To view your open questions, please click the following link(s) and keep them all current with updates.
http://www.experts-exchange.com/questions/Q.20245156.html
http://www.experts-exchange.com/questions/Q.20259219.html
http://www.experts-exchange.com/questions/Q.20263069.html
http://www.experts-exchange.com/questions/Q.20270808.html
http://www.experts-exchange.com/questions/Q.20269981.html
http://www.experts-exchange.com/questions/Q.20277592.html
http://www.experts-exchange.com/questions/Q.20279091.html


To view your locked questions, please click the following link(s) and evaluate the proposed answer.
http://www.experts-exchange.com/questions/Q.20263454.html
http://www.experts-exchange.com/questions/Q.20269773.html
http://www.experts-exchange.com/questions/Q.20279093.html

**** PLEASE DO NOT AWARD THE POINTS TO ME. *****
 
------------>  EXPERTS:  Please leave your closing recommendations if this item remains inactive another seven (7) days.  If you are interested in the cleanup effort, please click this link http://www.experts-exchange.com/jsp/qManageQuestion.jsp?ta=commspt&qid=20274643
POINTS FOR EXPERTS awaiting comments are listed here -> http://www.experts-exchange.com/commspt/Q.20277028.html
 

Moderators will finalize this question if you have not responded within about 7 days.  They will either move this to the PAQ (Previously Asked Questions) at zero points, delete it, or award expert(s) when recommendations are made, or an independent determination can be made.  Expert input is always appreciated to determine the fair outcome.
 
Thank you everyone.
 
Moondancer
Moderator @ Experts Exchange
 
LVL 14

Expert Comment

by:AvonWyss
ID: 6939857
Moondancer, as you can see in the comment from 03/19/2002 11:32AM PST, aztec has decided to split points.
 
LVL 1

Expert Comment

by:Moondancer
ID: 6940048
Thank you, AvonWyss, I have processed this point split.

I apologize for not reading all the question content in which I post these reminders, but given the volumes, and having recently implemented an API to process these thousands of items, I can't possibly read them all and get the job done.  Your information here is very much appreciated.

Points for Gwena - http://www.experts-exchange.com/jsp/qShow.jsp?qid=20289144
Points for edey - http://www.experts-exchange.com/jsp/qShow.jsp?qid=20289145
Points for AvonWyss - http://www.experts-exchange.com/jsp/qShow.jsp?qid=20289146

Thanks to everyone.

In the future, aztec, the point-split process can be handled by you directly, as noted above, through Community Support with a zero-point question.  What we'll do for you is change the original point value, refunding the rest to you so you can award one expert within the primary question; you then post a new question for each additional expert you wish to award, in the same topic area.  The title would be "Points for __expertname__" and in the comments field, just post the question link so they know which item it concerns.

This will then keep all your question history together for you, rather than having a Moderator process them for you, which means that you'll no longer have this specific transaction on the point-splits in your own personal history.

Moondancer - EE Moderator