Application exits prematurely with OpenMP with the error: Fatal User Error 1002: Not all work-sharing constructs executed by all threads

I added OpenMP code to some serial code in a simulator application. When I run a program that uses this application, the program exits unexpectedly with the output "The thread Win32 Thread (0x1828) has exited with code 1 (0x1)". This happens in the parallel region where I added the OpenMP code. Here is a code sample:

#pragma omp parallel for private(curr_proc_info, current_writer, method_h) shared(exceptionOccured) schedule(dynamic, 1)
    for (i = 0; i < method_process_num; i++)
    {
        current_writer = 0;

        // we need to add protection before we can dequeue a method from the methods queue
        #pragma omp critical(dequeueMethod)
        method_h = pop_runnable_method(curr_proc_info, current_writer);

        if (method_h != 0 && exceptionOccured == false) {
            try {
                method_h->semantics();
            }
            catch (const sc_report& ex) {
                ::std::cout << "\n" << ex.what() << ::std::endl;
                m_error = true;
                exceptionOccured = true;  // we cannot jump outside the loop, so instead of return we use a flag and return somewhere else
            }
        }
    }

The scheduling was static before I made it dynamic; after I switched to dynamic with a chunk size of 1, the application proceeded a little further before it exited. Can this be an indication of what is happening inside the parallel region? Thanks.
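For reference, the error in the title is the OpenMP runtime's complaint that a work-sharing construct was encountered by only some threads of the team; the specification requires that such a construct be reached by all threads of a team or by none. Below is a minimal, self-contained sketch (not the simulator code) of the kind of pattern that triggers it, for example if something like this were hidden inside semantics() or pop_runnable_method():

    #include <omp.h>
    #include <cstdio>

    int main()
    {
        #pragma omp parallel
        {
            // Non-conforming: the work-sharing construct below is reached by
            // only some threads of the team, which is exactly the situation
            // the "not all work-sharing constructs executed by all threads"
            // diagnostic describes.
            if (omp_get_thread_num() % 2 == 0)
            {
                #pragma omp for
                for (int i = 0; i < 8; i++)
                    std::printf("thread %d, i = %d\n", omp_get_thread_num(), i);
            }
        }
        return 0;
    }

With schedule(dynamic, 1) the mapping of iterations to threads changes from run to run, which could also explain why the failure point moved when the schedule was changed.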

Answers

As I read it (and I'm more of a Fortran programmer than a C/C++ one), your private variable curr_proc_info is not declared (or defined?) before it first appears in the call to pop_runnable_method. But private variables are undefined on entry to the parallel region.
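For example, here is a minimal sketch (made-up variable, not the simulator's types) of the difference: private gives each thread an uninitialised copy, whereas firstprivate initialises each copy from the value the variable had before the loop.

    #include <omp.h>
    #include <cstdio>

    int main()
    {
        int curr = 42;  // stands in for something like curr_proc_info

        // private(curr) would give each thread an *uninitialised* copy, so using
        // it before assigning to it is undefined; firstprivate(curr) initialises
        // every thread's copy with the value the variable had before the loop.
        #pragma omp parallel for firstprivate(curr)
        for (int i = 0; i < 4; i++)
        {
            std::printf("thread %d starts with curr = %d\n", omp_get_thread_num(), curr);
            curr += i;  // modifications stay local to this thread's copy
        }

        std::printf("after the loop curr is still %d\n", curr);  // prints 42
        return 0;
    }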

I also think your sharing of exceptionOccured is a little fishy, since it suggests that an exception raised on any thread should be noticed by every thread, not just the one in which it occurred. Of course, that may be your intent.
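If it is, note that the unsynchronised reads and writes of exceptionOccured in the loop are a data race. A minimal sketch (made-up loop body; assumes an OpenMP 3.1 compiler for atomic read/write) of one way to make the flag well defined:

    #include <omp.h>
    #include <cstdio>

    int main()
    {
        const int n = 100;
        bool exceptionOccured = false;  // shared early-exit flag, as in the question

        #pragma omp parallel for schedule(dynamic, 1)
        for (int i = 0; i < n; i++)
        {
            bool stop;
            #pragma omp atomic read
            stop = exceptionOccured;      // read the shared flag without a data race
            if (stop)
                continue;                 // cannot break out of an omp for, so just skip

            if (i == 42)                  // pretend this iteration hit an exception
            {
                #pragma omp atomic write
                exceptionOccured = true;  // publish the failure to the other threads
            }
        }

        std::printf("exceptionOccured = %d\n", (int)exceptionOccured);
        return 0;
    }

A critical section around the flag accesses would work just as well; the atomics are simply the lighter-weight option here.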

Cheers

Mark




