Best way to perform multiple time-taking operations for every single item in a large array

I have a function in my JavaScript code that loops through an array and performs some time-taking actions on each item of the array. It works fine for now when the number of items in the array is low, but I also want the code to work when the array is larger. Here is my function:

const fetchAndProcessNews = async (queryString, from) => {
  const query = {
    queryString,
    from,
    size: 1,
  }
  try {
    console.log('Fetching news...')
    const { articles } = await searchApi.getNews(query)
    console.log('total articles fetched:', articles.length)
    console.log('Fetched news:', articles)
    if (articles && articles.length > 0) {
      console.log('Processing news...')
      //looping through all the articles fetched from api
      for (const article of articles) {
        console.log('Processing article with name:', article.title)
        const { title, sourceUrl, id, publishedAt } = article
        //scraping content from the source url and returning the markup of the single article
        const markup = await scraper(sourceUrl)
        //using gpt to perform some tasks on the markup returned from scraping
        const data = await askGpt(markup)
        //using dall e to generate an image
        const generatedImageUrl = await generateImg(data?.imageDescription)
        //downloading the image from the url and uploading it to s3
        const s3ImageUrl = await generateImgUrl(generatedImageUrl, title, id)
        //uploading the article to strapi using post request
        const newTitle = data?.title
        const newMarkup = data?.content
        const description = data?.abstract
        const categories = data?.categories

        console.log('pushing article to strapi')
        await createPost(
          newTitle,
          description,
          newMarkup,
          s3ImageUrl,
          publishedAt,
          categories
        )
        console.log('article processing completed...')
      }
    } else {
      console.log('No articles found')
    }
  } catch (error) {
    console.error('Error fetching news:', error.message)
  }
}

Let me explain what I am doing: I fetch news articles from an API, and for every single article I perform these tasks:

  1. scrape the content from a URL provided by the API, using Cheerio, which takes some time
  2. use OpenAI to perform some tasks on the returned markup, which also takes a lot of time
  3. generate an image using DALL·E, which also takes time
  4. then I upload the image to S3
  5. then I upload all the things to Strapi using a POST request

Now my concern is: how will this code behave if the number of articles is 100 or 1000? Will it be able to handle all these time-consuming tasks? How can I optimize it so that it doesn't crash and works properly? I don't have much experience with this, which is why I'm worried. What techniques should I use? Should I use some queue, like Bull, or batch processing? If someone could provide a detailed answer, it would be a great help.

Answers

The idea is to not wait until all the steps that process one article have finished before starting on the next article.

You could, for instance, start the next article every 100 milliseconds. You could even start them all without delay, but then you risk making too many requests to the same server and hitting some server limit. So it is probably more prudent to have a slight delay between the initiations of article processing. For such an intermediate delay, you can use this general-purpose function:

const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

The easiest way to implement this idea is to move the code that processes a single article into its own function:

const processArticle = async (article) => {
    console.log('Processing article with name:', article.title)
    const { title, sourceUrl, id, publishedAt } = article
    //scraping content from the source url and returning the markup of the single article
    const markup = await scraper(sourceUrl)
    //using gpt to perform some tasks on the markup returned from scraping
    const data = await askGpt(markup)
    //using dall e to generate an image
    const generatedImageUrl = await generateImg(data?.imageDescription)
    //downloading the image from the url and uploading it to s3
    const s3ImageUrl = await generateImgUrl(generatedImageUrl, title, id)
    //uploading the article to strapi using post request
    const newTitle = data?.title
    const newMarkup = data?.content
    const description = data?.abstract
    const categories = data?.categories

    console.log('pushing article to strapi')
    await createPost(
      newTitle,
      description,
      newMarkup,
      s3ImageUrl,
      publishedAt,
      categories
    )
    console.log('article processing completed...')
};

Nothing in this code has changed; it has only been moved into its own function.

Now your main function can call this function without await. Instead, it can capture the promise it returns (a pending promise) and collect those promises in an array. This means several requests for different articles will now be made without waiting for their responses first. At the end, you'll probably want to wait until all those promises have settled.

Here is how your original function would then look:

const fetchAndProcessNews = async (queryString, from) => {
  const query = {
    queryString,
    from,
    size: 1,
  }
  try {
    console.log('Fetching news...')
    const { articles } = await searchApi.getNews(query)
    console.log('total articles fetched:', articles.length)
    console.log('Fetched news:', articles)
    if (articles && articles.length > 0) {
      console.log('Processing news...')
      // looping through all the articles fetched from api
      const promises = [];
      for (const article of articles) {
        promises.push(processArticle(article)); // We don't await!
        await delay(100); // Determine which delay is suitable
      }
      // All articles are now being processed; wait for all to finish (optional)
      await Promise.allSettled(promises);
    } else {
      console.log('No articles found')
    }
  } catch (error) {
    console.error('Error fetching news:', error.message)
  }
}

The await Promise.allSettled(promises) step is optional, but it is helpful for the caller of fetchAndProcessNews, because the promise they receive will then only resolve once all the work has been done.
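If you keep that await, the settled results also let you report which articles failed without aborting the rest. A minimal sketch of what could follow the Promise.allSettled line (the filtering and log format are just illustrative):

const results = await Promise.allSettled(promises);
// collect the rejection reasons of the articles that failed
const failures = results.filter(result => result.status === 'rejected');
if (failures.length > 0) {
  console.warn(`${ failures.length } article(s) failed:`);
  failures.forEach(({ reason }) => console.warn('-', reason?.message ?? reason));
}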

Finally, you may want to improve the console.log output produced in processArticle, because those messages will now interleave. It helps to know which article a message like "pushing article to strapi" or "article processing completed" refers to.
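For example, a small per-article logger could prefix every message with the article title. A minimal sketch (the log helper is just illustrative):

const processArticle = async (article) => {
  // prefix all messages with the article title so interleaved logs stay readable
  const log = (...args) => console.log(`[${ article.title }]`, ...args);

  log('Processing article...');
  // ... same steps as before ...
  log('pushing article to strapi');
  // ...
  log('article processing completed...');
};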

One might try both ...

  • separating the parallelized/deferred calculation of each scraped article's post-data from the upload of that calculated post-data,

  • and parallelizing both the post-data calculations and the post requests.

A possible solution first implements a function which calculates the post-data of any article that is going to be scraped.

It does so with the help of an async generator function (see the MDN JavaScript documentation on async generators).

One creates batches of parallel deferred post-data calculations by splicing N article items at a time from an array of articles to be scraped (the ones initially retrieved from the first API call).

As long as there are items left in the continuously shrinking article list, the async generator yields the awaited Promise.all of each batch's deferred post-data calculations.

Each yielded value is an array of resolved post-data. One can therefore also try to parallelize the post API calls, creating a Promise.all of all the createPost calls, each time spreading the related post-data into the call.

async function getDeferredArticlePostData(article) {
  const { title, sourceUrl, id, publishedAt } = article;

  console.log(`Get post-data of article "${ title }".`);

  // nothing here which could be further parallelized.
  const markup = await scraper(sourceUrl);
  const data = await askGpt(markup, title); // cheating `title` into it for demonstration purpose.

  const generatedImageUrl = await generateImg(data?.imageDescription);
  const s3ImageUrl = await generateImgUrl(generatedImageUrl, title, id);

  // the post-data derived from a scraped article.
  return [
    data?.title, data?.abstract, data?.content,
    s3ImageUrl, publishedAt, data?.categories,
  ];
}

async function* createDeferredBatchesOfParallelRetrievedArticlePostData(
  articleList = [], batchSize = 4, // play with the batch size.
) {
  while (articleList.length >= 1) {

    // - try to parallelize the calculation of article post-data by creating
    //   `Promise.all` based batches of deferred post-data calculations.

    yield await Promise.all(
      articleList.splice(0, batchSize).map(getDeferredArticlePostData)
    );
  }
}

async function fetchAndProcessNews(queryString, from) {
  const query = { queryString, from, size: 1 };

  try {
    const fetchTime = Date.now();
    console.log('... start fetching and processing ...');

    const { articles } = await searchApi.getNews(query);

    if (articles && articles.length > 0) {

      const deferredPostDataPool =
        createDeferredBatchesOfParallelRetrievedArticlePostData([...articles]);

      // ... separate the (parallelized and batched) post-data calculation ...

      for await (const listOfResolvedPostData of deferredPostDataPool) {

        const batchTime = Date.now();
        console.log("... next batch ...");

        // ... from posting an article's new data;
        //     but here too, trying to parallelize the post api calls.

        // - one should give it a try playing around with delaying a bit each
        //   batch of post-data uploads in case one runs into API blocking issues. 
        //
        // await new Promise(resolve => setTimeout(resolve, 500)); // waits for 500 msec.

        // - one also intentionally omits awaiting the `Promise.all`.
        //
        // - but one might change it back to `await` and notice the difference.

        /* await */Promise.all(
          listOfResolvedPostData.map(postData => createPost(...postData))
        );
        console.log(`... batch process time ... ${ Date.now() - batchTime } msec ...`);
        console.log(`... fetch process time ... ${ Date.now() - fetchTime } msec ...`);
      }
      console.log(`... fetch process total ... ${ Date.now() - fetchTime } msec ...`);

    } else {
      console.log('No articles found.');
    }
  } catch (error) {
    console.error('Error while fetching news:', error.message);
  }
}
fetchAndProcessNews();
.as-console-wrapper { min-height: 100%!important; top: 0; }
<script>
// mocking an entire testable environment.

const scraper = async (url) =>
  new Promise(resolve =>
    setTimeout(resolve, (Math.random() * 500), '<markup/>')
  );
  
const askGpt = async (markup, mockedTitle) =>
  new Promise(resolve =>
    setTimeout(resolve, (Math.random() * 500), {
      imageDescription: 'foo bar',
      title: mockedTitle,
      abstract: 'TLDR',
      content: 'Lorem ipsum dolor sit amet',
      categories: ['asynchronous', 'performance', 'test'],
    })
  );

const generateImg = async (description) => {
  const name = `${ description.split(/\s+/).join('-').replace(/[^\w-]/g, '') }.jpg`;

  return new Promise(resolve => setTimeout(resolve, (Math.random() * 500), name));
};
const generateImgUrl = async (name, title, id) => {
  title = title.split(/\s+/).join('-').replace(/[^\w-]/g, '');

  return new Promise(resolve =>
    setTimeout(resolve, (Math.random() * 500), `s3/bucket/images/${ id }/${ title }/${ name }`)
  );
};

const createPost = async (...args) => {
  console.log(`Posting scraped article data from arguments ... ["${ args.join( ", " ) }"]`);

  return new Promise(resolve =>
    setTimeout(() => {

      console.log(`Article data posted successfully for title "${ args[0] }"`);
      resolve({ ok: true });

    }, (Math.random() * 2000))
  );
};


const searchApi = {
  getNews: async (query) =>
    new Promise(resolve =>
      setTimeout(resolve, (Math.random() * 1500), {
        // 13 mock articles with alternating titles and source urls.
        articles: Array.from({ length: 13 }, (_, idx) => ({
          title: `${ String(idx + 1).padStart(2, '0') }) ${ (idx % 2 === 0) ? 'Lorem Ipsum' : 'Dolor sit amet' }`,
          sourceUrl: (idx % 2 === 0) ? 'foo/bar/baz' : 'biz/buzz/booz',
          id: crypto?.randomUUID?.(),
          publishedAt: new Date(Date.now() - (Math.random() * 14*24*3_600_000)).toUTCString(),
        })),
      })
    ),
};
</script>

Note

An even better solution would entirely decouple the scraping from the post-data upload. The code which consumes the async generator then needs to be changed into feeding each batch of resolved post-data arguments into a queue. The latter then needs to be implemented in a way that it is aware of post-data items getting enqueued and that it continuously keeps uploading post-data items in an efficient (parallel uploads) way as long as there are still items present.

Edit

The next provided example code implements the above-mentioned queue-based approach. It does so by providing a generic abstraction of what one could call an "Async Task Execution Queue", which can be used by both of the decoupled overall tasks: the scraping of all articles and the posting of all calculated, article-related parameters. The price for achieving a strong decoupling is paid with the usage of event dispatching. Thus, the AsyncTaskExecutionQueue extends EventTarget.

The new implementation of fetchAndProcessNews demonstrates how this behavior can be achieved: both concerns are strongly decoupled, yet coordinated solely through event dispatching.

The main techniques are ...

  • making use of Promise.withResolvers,
  • updating and counting the various task totals and comparing them to the total article count,

... both of which enable a clean(er) composition where fetchAndProcessNews returns a promise that either gets resolved (resolve) once the settled task counts match the total article count, or gets rejected (reject) upon any of the aforementioned fatal errors.
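Note that Promise.withResolvers is a fairly recent addition (ES2024). In case the runtime does not provide it, the same pattern can be expressed in a few lines; a minimal sketch of such a fallback:

// fallback for runtimes without Promise.withResolvers (ES2024):
// expose a promise together with its own resolve/reject functions.
function withResolvers() {
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}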

// implementation of just the business/problem specific logic.

async function getArticlePostParametersDeferred(article) {
  const { title, sourceUrl, id, publishedAt } = article;

  console.log(`Get post-data of article "${ title }".`);

  // nothing here which could be further parallelized.
  const markup = await scraper(sourceUrl);
  const data = await askGpt(markup, title); // cheating `title` into it for demonstration purpose.

  const generatedImageUrl = await generateImg(data?.imageDescription);
  const s3ImageUrl = await generateImgUrl(generatedImageUrl, title, id);

  // the post-data derived from a scraped article.
  return [
    data?.title, data?.abstract, data?.content,
    s3ImageUrl, publishedAt, data?.categories,
  ];
}

async function fetchAndProcessNews(queryString, from) {
  // implemented similar to `Promise.allSettled`.
  const { promise, resolve, reject } = Promise.withResolvers();

  const allSettledData = {
    scraping: { resolved: [], rejected: [] },
    posting: { resolved: [], rejected: [] },
  };
  let articleCount = -1;

  function settleAllIfPossible() {
    const {
      scraping: {
        resolved: { length: scrapingResolvedCount },
        rejected: { length: scrapingRejectedCount },
      },
      posting: {
        resolved: { length: postingResolvedCount },
        rejected: { length: postingRejectedCount },
      },
    } = allSettledData;

    if (
      (scrapingResolvedCount + scrapingRejectedCount === articleCount) &&
      (postingResolvedCount + postingRejectedCount === articleCount)
    ) {
      resolve(allSettledData);
    }
  }
  // play with the queues' batch sizes.

  // - e.g. batch size of 3 for post requests.
  const postTaskQueue = new AsyncTaskExecutionQueue(3);   // 4
  // - e.g. batch size of 5 for scraping requests.
  const scrapeTaskQueue = new AsyncTaskExecutionQueue(5); // 4

  postTaskQueue
    .addEventListener('rejected', ({ detail: { reasons: postFailureList } }) => {

      allSettledData.posting.rejected.push(...postFailureList);
      console.log({ postFailureList });

      settleAllIfPossible();
    });
  postTaskQueue
    .addEventListener('resolved', ({ detail: { values: postResponseList } }) => {

      allSettledData.posting.resolved.push(...postResponseList);
      console.log({ postResponseList });

      settleAllIfPossible();
    });

  scrapeTaskQueue
    .addEventListener('rejected', ({ detail: { reasons: scrapeFailureList } }) => {

      allSettledData.scraping.rejected.push(...scrapeFailureList);
      console.log({ scrapeFailureList });

      settleAllIfPossible();
    });
  scrapeTaskQueue
    .addEventListener('resolved', ({ detail: { values: postParamsList } }) => {

      postTaskQueue
        .enqueue(
          postParamsList
            .map(postParams => ({
              createAsynTask: createPost,
              params: postParams,
            }))
        );

      allSettledData.scraping.resolved.push(...postParamsList);
      // console.log({ postParamsList });

      settleAllIfPossible();
    });

  try {
    const { articles } = await searchApi.getNews({ queryString, from, size: 1 });

    if (articles) {
      articleCount = articles.length;

      if (articleCount > 0) {

        scrapeTaskQueue
          .enqueue(
            [...articles]
              .map(article => ({
                createAsynTask: getArticlePostParametersDeferred,
                params: [article],
              }))
          );

      } else {
        reject('No articles found.');
      }
    } else {
      reject('Internal error.');
    }
  } catch (reason) {

    reject(reason);
  }
  return promise;
}


(async () => {
  console.log('... start fetching and processing ...');
  const timestamp = Date.now();

  try {
    const allSettledData = await fetchAndProcessNews();

    console.log(`... total fetching and processing time ... ${ Date.now() - timestamp } msec ...`);
    console.log({ allSettledData });

  } catch (reason) {

    console.log(`... total fetching and processing time ... ${ Date.now() - timestamp } msec ...`);
    console.error({ reason });
  }
})();
.as-console-wrapper { min-height: 100%!important; top: 0; }
<script>
  // helper/utility functionality.

  function isFunction(value) {
    return (
      (typeof value === 'function') &&
      (typeof value.call === 'function') &&
      (typeof value.apply === 'function')
    );
  }
  function isAsynFunction(value) {
    return (/\[object\s+AsyncFunction\]/).test(
      Object.prototype.toString.call(value)
    );
  }

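  // Throttles `proceed`: per `threshold` window, at most one leading call runs
  // immediately and one trailing call is scheduled; calls arriving in between
  // are dropped. Async functions get an async wrapper, sync ones a sync wrapper.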
  function throttle(proceed, threshold = 200, target) {
    let timeoutId = null;
    let referenceTime = 0;

    return (isAsynFunction(proceed) && async function throttled(...args) {
      const currentTime = Date.now();

      if (currentTime - referenceTime >= threshold) {
        clearTimeout(timeoutId);

        referenceTime = currentTime;
        const trigger = proceed.bind((target ?? this), ...args);

        timeoutId = setTimeout((() => {

          referenceTime = 0;
          trigger();

        }), threshold);

        trigger();
      }
    }) || (isFunction(proceed) && function throttled(...args) {
      const currentTime = Date.now();

      if (currentTime - referenceTime >= threshold) {
        clearTimeout(timeoutId);

        referenceTime = currentTime;
        const trigger = proceed.bind((target ?? this), ...args);

        timeoutId = setTimeout((() => {

          referenceTime = 0;
          trigger();

        }), threshold);

        trigger();
      }
    }) || proceed;
  }

  function getValuesAndReasons(settledItems) {
    return settledItems
      .reduce(({ values, reasons }, { value, reason = null }) => {

        if (reason !== null) {
          reasons.push(reason);
        } else {
          values.push(value);
        }
        return { values, reasons };

      }, { values: [], reasons: [] });
  }
</script>

<script>
  // abstraction of an "Async Task Execution Queue".

  async function* createAsyncTaskBatches(taskComponentsList, batchSize) {
    while (taskComponentsList.length >= 1) {

      // - try to parallelize the execution of asynchronous tasks by creating
      //   `Promise.allSettled` based arrays of deferred task execution values.

      yield await Promise.allSettled(
        taskComponentsList
          .splice(0, batchSize)
          .map(({ createAsynTask, params }) => createAsynTask(...params))
      );
    }
  }

  async function createExecuteAndNotifyAsyncTasks() {
    const { queue, taskComponentsList, batchSize } = this;

    const deferredResultsPool =
      createAsyncTaskBatches(taskComponentsList, batchSize);

    for await (const settledItems of deferredResultsPool) {
      // console.log({ settledItems });

      queue
        .dispatchEvent(
          new CustomEvent('settled', { detail: { results: settledItems } })
        );

      const { values, reasons } = getValuesAndReasons(settledItems);

      if (values.length > 0) {
        queue
          .dispatchEvent(
            new CustomEvent('resolved', { detail: { values } })
          );
      }
      if (reasons.length > 0) {
        queue
          .dispatchEvent(
            new CustomEvent('rejected', { detail: { reasons } })
          );
      }
    }
  }

  class AsyncTaskExecutionQueue extends EventTarget {
    #createExecuteAndNotify;
    #taskComponentsList = [];

    constructor(batchSize = 4, throttleThreshold = 0) {
      super();

      this.#createExecuteAndNotify = throttle(
        createExecuteAndNotifyAsyncTasks.bind({
          queue: this, taskComponentsList: this.#taskComponentsList, batchSize,
        }),
        throttleThreshold,
      );
    }
    enqueue(...args) {
      const taskComponents = args
        .flat()
        .filter(({ createAsynTask, params }) =>
          isFunction(createAsynTask) && Array.isArray(params)
        );

      if (taskComponents.length > 0) {
        this.#taskComponentsList.push(...taskComponents);
        this.#createExecuteAndNotify();
      }
    }  
  }
</script>

<script>
  // mocking an entire api environment.

  const scraper = async (url) =>
    new Promise(resolve =>
      setTimeout(resolve, (Math.random() * 1500), '<markup/>')
    );

  const askGpt = async (markup, mockedTitle) =>
    new Promise(resolve =>
      setTimeout(resolve, (Math.random() * 1500), {
        imageDescription: 'foo bar',
        title: mockedTitle,
        abstract: 'TLDR',
        content: 'Lorem ipsum dolor sit amet',
        categories: ['asynchronous', 'performance', 'test'],
      })
    );

  const generateImg = async (description) => {
    const name = `${ description.split(/\s+/).join('-').replace(/[^\w-]/g, '') }.jpg`;

    return new Promise(resolve => setTimeout(resolve, (Math.random() * 500), name));
  };
  const generateImgUrl = async (name, title, id) => {
    title = title.split(/\s+/).join('-').replace(/[^\w-]/g, '');

    return new Promise(resolve =>
      setTimeout(resolve, (Math.random() * 500), `s3/bucket/images/${ id }/${ title }/${ name }`)
    );
  };

  const createPost = async (...args) => {
    console.log(`Posting scraped article data from arguments ... ["${ args.join( ", " ) }"]`);

    return new Promise(resolve =>
      setTimeout(resolve, (Math.random() * 1500), { title: args[0] })
    );
  };

  const searchApi = {
    getNews: async (query) =>
      new Promise(resolve =>
        setTimeout(resolve, (Math.random() * 1500), {
        //articles: [],
          // 13 mock articles with alternating titles and source urls.
          articles: Array.from({ length: 13 }, (_, idx) => ({
            title: `${ String(idx + 1).padStart(2, '0') }) ${ (idx % 2 === 0) ? 'Lorem Ipsum' : 'Dolor sit amet' }`,
            sourceUrl: (idx % 2 === 0) ? 'foo/bar/baz' : 'biz/buzz/booz',
            id: crypto?.randomUUID?.(),
            publishedAt: new Date(Date.now() - (Math.random() * 14*24*3_600_000)).toUTCString(),
          })),
        })
      ),
  };
</script>



