
  1. Community Forums
  2. Custom IC SKILL
  3. What is the most efficient approach to record each line...

Stats

  • Locked
  • Replies 16
  • Subscribers 143
  • Views 18432
This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question, you can start a new discussion.

What is the most efficient approach to record each line from an input file?

Charley Chen over 14 years ago

Hi All,

I use a table to record each line and its value, but the loop becomes slower and slower as it runs.

Once the loop has finished, a lookup such as table["A11100"] -> list(1000000 2000000 3000000 4000000) is very fast; it is the loop itself that slows down.

What is the best way to do this?

        ;; write a template file
        ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
        getCurrentTime()
        outPort = outfile("test")
        count = 1000000
        for(i 1 count
            fprintf(outPort "1000000 2000000 3000000 4000000\n")
        )
        close(outPort)
        getCurrentTime()
        ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

        ;; read each line into the table
        ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
        nextLine = nil
        inPort = infile("test")
        getCurrentTime()
        when(inPort
            fileCount = 1
            table = makeTable("table" nil)
            while(gets(nextLine inPort)
                qq = parseString(nextLine "\n")   ; strips the trailing newline (qq is unused below)
                str = sprintf(nil "A%d" fileCount)
                table[str] = list(1000000 2000000 3000000 4000000)
                fileCount++
            ) ; while
            close(inPort)
        ) ; when
        getCurrentTime()
        ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

 

Thank you,

Charley

  • Andrew Beckett over 14 years ago

    Charley,

    As I've mentioned several times, the SKILL profiler would help here. In general it is best to profile your real code (I presume this is a toy example), but when I profiled the read loop above, the time was mostly spent in gc (garbage collection). I ran this with a local file, and it spent 377 seconds out of 383 (in IC5141) in gc. Note that time spent in gc does not necessarily mean there is garbage: when SKILL needs to allocate another chunk of memory for various objects, it first scans for and collects garbage before allocating new space.
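    As a minimal sketch of running the profiler over a script (assuming the standard profile/profileSummary interface; "readLoop.il" is a hypothetical file name holding the code above — check your release's documentation for the exact options):

    ```skill
    ;; Sketch: profile CPU time while running the read loop
    profile('time)          ; start sampling CPU time per function
    load("readLoop.il")     ; run the code to be measured (hypothetical file)
    unprofile()             ; stop profiling
    profileSummary()        ; print the per-function time breakdown; gc shows up here
    ```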

    Because you're creating a lot of string and list cells (1 million and 4 million respectively), I can tell SKILL to pre-allocate memory for these. If I add:

    needNCells('list 4000000)
    needNCells('string 1000000)

    before reading the file, the reading loop takes 5.6 seconds (including populating the table).

    Note that in IC615 (I'm not sure exactly when in IC61 it happened) work was done on the chunk sizes so that less time is spent in gc: the time was reduced to 105 seconds. With the pre-allocation, the gc time drops to essentially nothing.

    You can also see the effect of this if you read the file twice: the table gets recreated, so all the previous keys and content become garbage and are reclaimed, once, but by then plenty of memory is already allocated.

    gcsummary() is useful for finding out what is allocated.
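    Putting the pre-allocation together with the read loop from the question, a sketch of the tuned version might look like this (same toy data; the cell counts are sized for 1 million lines of 4 values each):

    ```skill
    ;; Pre-allocate before the loop: 1M key strings, and 4M list cells
    ;; (4 cells per entry * 1M entries), so gc does not run repeatedly.
    needNCells('list 4000000)
    needNCells('string 1000000)

    inPort = infile("test")
    when(inPort
        fileCount = 1
        table = makeTable("table" nil)
        while(gets(nextLine inPort)
            table[sprintf(nil "A%d" fileCount)] =
                list(1000000 2000000 3000000 4000000)
            fileCount++
        )
        close(inPort)
    )
    ```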

    Regards,

    Andrew.



© 2025 Cadence Design Systems, Inc. All Rights Reserved.
