I'm looking for the smallest possible amount of code that can be written both for a CPU (compiled with g++) and for a GPU (compiled with nvcc), where the GPU consistently outperforms the CPU. Any type of algorithm is acceptable.
Clarification: I'm literally looking for two short blocks of code, one for the CPU (using C++ in g++) and one for the GPU (using C++ in nvcc), for which the GPU outperforms the CPU. Preferably on the scale of seconds or milliseconds. The shortest code pair possible.
First off, I'll reiterate my comment: GPUs are high bandwidth, high latency. Trying to get a GPU to beat a CPU on a nanosecond job (or even a millisecond or one-second job) completely misses the point of using a GPU. Below is some simple code, but to really appreciate the performance benefits of the GPU, you need a problem size big enough to amortize the startup costs; otherwise the comparison is meaningless. I can beat a Ferrari over a two-foot race, simply because it takes some time to turn the key, start the engine and press the pedal. That doesn't mean I'm faster than a Ferrari in any meaningful way.
Use something like this in C++:
#include <stdio.h>

#define N (1024*1024)
#define M (1000000)

int main()
{
    float data[N];   // ~4 MB on the stack; make this static or heap-allocated if your stack limit is smaller

    for(int i = 0; i < N; i++)
    {
        data[i] = 1.0f * i / N;
        for(int j = 0; j < M; j++)
        {
            data[i] = data[i] * data[i] - 0.25f;
        }
    }

    int sel;
    printf("Enter an index: ");   // reading user input keeps the compiler from optimizing the whole loop away
    scanf("%d", &sel);
    printf("data[%d] = %f\n", sel, data[sel]);
}
Use something like this in CUDA:
#include <stdio.h>

#define N (1024*1024)
#define M (1000000)

__global__ void cudakernel(float *buf)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    buf[i] = 1.0f * i / N;
    for(int j = 0; j < M; j++)
        buf[i] = buf[i] * buf[i] - 0.25f;
}

int main()
{
    float data[N];
    float *d_data;

    cudaMalloc(&d_data, N * sizeof(float));
    cudakernel<<<N/256, 256>>>(d_data);   // N is a multiple of 256, so no bounds check is needed in the kernel
    cudaMemcpy(data, d_data, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);

    int sel;
    printf("Enter an index: ");
    scanf("%d", &sel);
    printf("data[%d] = %f\n", sel, data[sel]);
}
If that doesn't work, try making N and M bigger, or changing 256 to 128 or 512.
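A side note not in the original answer: CUDA runtime calls and kernel launches fail silently unless you check the error state yourself, so if the GPU version seems to do nothing, error checking tells you where it went wrong. Here is a minimal sketch of the same program with a simple checking macro added (the macro name and structure are my own, not from the answer):

// Sketch only: the CUDA example above with basic runtime-API error checking.
#include <stdio.h>

#define N (1024*1024)
#define M (1000000)

#define CHECK(call) do {                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error: %s (line %d)\n",             \
                    cudaGetErrorString(err_), __LINE__);              \
            return 1;                                                 \
        }                                                             \
    } while (0)

__global__ void cudakernel(float *buf)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    buf[i] = 1.0f * i / N;
    for (int j = 0; j < M; j++)
        buf[i] = buf[i] * buf[i] - 0.25f;
}

int main()
{
    static float data[N];
    float *d_data;

    CHECK(cudaMalloc(&d_data, N * sizeof(float)));
    cudakernel<<<N/256, 256>>>(d_data);
    CHECK(cudaGetLastError());       // reports an invalid launch configuration
    CHECK(cudaMemcpy(data, d_data, N * sizeof(float), cudaMemcpyDeviceToHost));
    CHECK(cudaFree(d_data));

    printf("data[0] = %f, data[N-1] = %f\n", data[0], data[N - 1]);
    return 0;
}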
A very simple approach is to compute the squares of, say, the first 100,000 integers, or a large matrix operation. It's easy to implement and plays to the GPU's strengths by avoiding branching and not requiring a stack. I did this with OpenCL vs. C++ a while back and got some pretty astonishing results (a 2GB GTX 460 achieved about 40x the performance of a dual-core CPU). A minimal sketch of the idea follows the pointers below.
Were you looking for example code, or just ideas?
Edit: the 40x was relative to a dual-core CPU, not a quad-core.
Some pointers:
As I said in my comment reply to @Paul R, consider using OpenCL, since it easily lets you run the same code on the GPU and the CPU without having to reimplement it.
(This is probably pretty obvious in retrospect.)
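Not from the original answer, but here is a minimal CUDA sketch of the "square the first K integers" idea described above (K, the block size, and the output check are arbitrary illustrative choices; the OpenCL version mentioned would be structured the same way). At this size the GPU will not actually win, for the startup-cost reasons discussed earlier; the sketch only shows the structure of the comparison:

// Sketch only: squares of the first K integers on the CPU and the GPU.
#include <stdio.h>
#include <stdlib.h>

#define K (100000)

__global__ void squares_gpu(float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < K)                       // grid is rounded up, so guard the tail
        out[i] = (float)i * (float)i;
}

void squares_cpu(float *out)
{
    for (int i = 0; i < K; i++)
        out[i] = (float)i * (float)i;
}

int main()
{
    float *h_cpu = (float *)malloc(K * sizeof(float));
    float *h_gpu = (float *)malloc(K * sizeof(float));
    float *d_out;

    squares_cpu(h_cpu);

    cudaMalloc(&d_out, K * sizeof(float));
    squares_gpu<<<(K + 255) / 256, 256>>>(d_out);
    cudaMemcpy(h_gpu, d_out, K * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    printf("CPU: %f  GPU: %f\n", h_cpu[K - 1], h_gpu[K - 1]);
    free(h_cpu);
    free(h_gpu);
    return 0;
}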
Here is a similar example I put together with time measurements. The GPU was a GTX 660; its measured time includes the data transfers in addition to the actual computation.
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <time.h>
#define N (1024*1024)
#define M (10000)
#define THREADS_PER_BLOCK 1024
void serial_add(double *a, double *b, double *c, int n, int m)
{
for(int index=0;index<n;index++)
{
for(int j=0;j<m;j++)
{
c[index] = a[index]*a[index] + b[index]*b[index];
}
}
}
__global__ void vector_add(double *a, double *b, double *c)
{
int index = blockIdx.x * blockDim.x + threadIdx.x;
for(int j=0;j<M;j++)
{
c[index] = a[index]*a[index] + b[index]*b[index];
}
}
int main()
{
clock_t start,end;
double *a, *b, *c;
int size = N * sizeof( double );
a = (double *)malloc( size );
b = (double *)malloc( size );
c = (double *)malloc( size );
for( int i = 0; i < N; i++ )
{
a[i] = b[i] = i;
c[i] = 0;
}
start = clock();
serial_add(a, b, c, N, M);
printf( "c[0] = %d
",0,c[0] );
printf( "c[%d] = %d
",N-1, c[N-1] );
end = clock();
float time1 = ((float)(end-start))/CLOCKS_PER_SEC;
printf("Serial: %f seconds
",time1);
start = clock();
double *d_a, *d_b, *d_c;
cudaMalloc( (void **) &d_a, size );
cudaMalloc( (void **) &d_b, size );
cudaMalloc( (void **) &d_c, size );
cudaMemcpy( d_a, a, size, cudaMemcpyHostToDevice );
cudaMemcpy( d_b, b, size, cudaMemcpyHostToDevice );
vector_add<<< (N + (THREADS_PER_BLOCK-1)) / THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( d_a, d_b, d_c );
cudaMemcpy( c, d_c, size, cudaMemcpyDeviceToHost );
printf( "c[0] = %d
",0,c[0] );
printf( "c[%d] = %d
",N-1, c[N-1] );
free(a);
free(b);
free(c);
cudaFree( d_a );
cudaFree( d_b );
cudaFree( d_c );
end = clock();
float time2 = ((float)(end-start))/CLOCKS_PER_SEC;
printf("CUDA: %f seconds, Speedup: %f
",time2, time1/time2);
return 0;
}
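A side note, not part of the answer above: clock() times the whole host-side section, including allocation and transfers. If you want to time only the kernel itself, the CUDA event API is the usual tool. Here is a minimal, self-contained sketch; the kernel is just a placeholder I made up to keep the example runnable:

// Sketch: timing a kernel with CUDA events instead of clock().
#include <stdio.h>

__global__ void dummy_kernel(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] = buf[i] * 2.0f + 1.0f;
}

int main()
{
    const int n = 1024 * 1024;
    float *d_buf;
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);                   // enqueue "start" before the kernel
    dummy_kernel<<<(n + 255) / 256, 256>>>(d_buf, n);
    cudaEventRecord(stop, 0);                    // enqueue "stop" after the kernel
    cudaEventSynchronize(stop);                  // wait until the GPU reaches "stop"

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);      // elapsed GPU time in milliseconds
    printf("Kernel time: %f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    return 0;
}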
I agree with David's comments about OpenCL being a great way to test this, because of how easy it is to switch between running code on the CPU vs. the GPU. If you're able to work on a Mac, Apple has a nice bit of sample code that does an N-body simulation using OpenCL, with kernels running on the CPU, the GPU, or both. You can switch between them in real time, and the FPS count is displayed on screen.
For a much simpler case, there is also a "hello world" OpenCL command-line application that calculates squares in a similar manner. It could probably be ported to non-Mac platforms without much effort. To switch between GPU and CPU usage, I believe you just need to change the line
int gpu = 1;
in the source to 0 for the CPU or 1 for the GPU.
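For context (not from the answer above, and written from memory of how such OpenCL host code is typically structured rather than verified against Apple's sample), that flag usually just feeds the device-type argument of clGetDeviceIDs. A minimal sketch:

// Sketch: switching an OpenCL host program between CPU and GPU with one flag.
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main()
{
    int gpu = 1;   // 0 = run on the CPU device, 1 = run on the GPU device
    cl_device_id device_id;

    // Passing NULL for the platform works on Apple's implementation; on other
    // platforms you may need to call clGetPlatformIDs first.
    cl_int err = clGetDeviceIDs(NULL,
                                gpu ? CL_DEVICE_TYPE_GPU : CL_DEVICE_TYPE_CPU,
                                1, &device_id, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "Failed to find an OpenCL device (err = %d)\n", err);
        return 1;
    }

    char name[256];
    clGetDeviceInfo(device_id, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Selected device: %s\n", name);
    return 0;
}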
Dr. David Gohara has also demonstrated OpenCL GPU speedups for molecular dynamics calculations in an introductory video session on the topic.
I've also done some tinkering in the mobile space using OpenGL ES shaders to perform rudimentary calculations. I found that a simple color thresholding shader run across an image was roughly 14-28X faster when run as a shader on the GPU than the same calculation performed on the CPU for that particular device.